Quantitative observation in manufacturing cannot rely solely on human senses such as vision, hearing, and touch; it requires special measuring instruments. Measurement refers to expressing the size of an object numerically against a defined standard (unit).
What Is Measurement?
To measure the size of something, the object being measured must be compared with a reference. Measuring instruments used as references come in a wide variety of types according to the purpose, method, and accuracy of the measurement. By measuring dimensions correctly, it is possible to check whether a manufactured part meets the required specifications (falls within tolerance). Accurate measurement is therefore a basic condition for confirming that a product has been manufactured correctly.
The Importance of Measurement:
Correct dimensional measurement is the foundation of manufacturing. From material procurement, processing, and assembly through quality inspection and shipment, every process relies on measurements made against the same benchmarks so that products meet the design and their quality is assured.
If measurement is skipped at any stage of production, quality cannot be guaranteed, and if defective products reach customers, complaints will follow. Measurements must therefore be performed correctly at every stage of the manufacturing process.
Manufacturers need standardized measurement practices to manage and use measuring instruments properly. Measurement management is essential to quality management; one standardized measurement management system developed in recent years is ISO 10012.
What Measurement Methods are Available?
Size measurement methods are divided into direct measurement and indirect measurement.
- Direct measurement: Also known as absolute measurement, this is a method of measuring the target directly with an instrument such as a vernier caliper or a three-dimensional (coordinate) measuring machine. It can cover a wide range of sizes within the scale range of the instrument, but misreading the scale can lead to erroneous measurements.
- Indirect measurement: A method of comparing the target with a reference object of known size and measuring the difference between the two. It is therefore also called comparative measurement. A simple sketch of both methods follows below.
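As a rough illustration of the two methods, the following Python sketch contrasts a direct reading with a comparative measurement; the 25 mm reference and the 0.012 mm indicator deviation are hypothetical example values, not figures from any particular instrument.

```python
def direct_measurement(scale_reading_mm: float) -> float:
    """Direct measurement: the instrument's scale reading is itself the result."""
    return scale_reading_mm

def indirect_measurement(reference_size_mm: float, indicator_deviation_mm: float) -> float:
    """Comparative (indirect) measurement: known reference size plus the measured deviation."""
    return reference_size_mm + indicator_deviation_mm

# Hypothetical example: a 25.000 mm reference piece is used, and the comparator
# shows the part to be 0.012 mm larger than that reference.
print(direct_measurement(25.012))            # 25.012
print(indirect_measurement(25.000, 0.012))   # 25.012
```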
Benchmarks for Length Units:
- Human benchmark: The way basic units of length are defined has changed greatly over time. Long ago, the human body was used as the benchmark. For example, the distance from the elbow to the fingertip was a cubit, but this length varied from region to region. To this day, countries such as the United States still use length units derived from the human body, such as the yard, foot, and inch.
- Earth benchmark: As navigation flourished in Western Europe, it became necessary to unify units of length across the world. In the 17th century, Europe began to discuss a unified unit, and after more than a century of discussion, France proposed the meter (from the Greek word for "measure") in 1791. The benchmark at that time was the meridian distance from the Earth's North Pole to the equator, one ten-millionth of which was taken as 1 meter. Later, in the late 19th century, to unify the world's dimensional benchmarks, a platinum-iridium alloy bar was made in France and designated as the international prototype of the meter.
- The speed-of-light benchmark: The Earth-based meter was questioned from the beginning because it was difficult to measure, and the prototype bar also posed standardization problems over the years, so discussions began on establishing a new benchmark. At the General Conference on Weights and Measures (CGPM) held in 1960, the orange light emitted by krypton-86 in a vacuum was adopted as the benchmark, with 1 meter defined as 1,650,763.73 of its wavelengths. In 1983, thanks to advances in laser technology, the meter was redefined in terms of the speed of light and time: 1 meter is the distance light travels in a vacuum in 1/299,792,458 of a second. This remains the definition of the meter today, as the sketch below illustrates.
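The 1983 definition is simple enough to write out as arithmetic. The following minimal sketch just multiplies the defined speed of light by the defined time interval; it illustrates the definition rather than any measurement procedure.

```python
# Since 1983 the speed of light in a vacuum is exact by definition of the meter.
SPEED_OF_LIGHT_M_PER_S = 299_792_458

def distance_travelled_m(time_s: float) -> float:
    """Distance light covers in a vacuum during the given time."""
    return SPEED_OF_LIGHT_M_PER_S * time_s

# Light travels exactly 1 m in 1/299,792,458 s (floating-point rounding aside).
print(distance_travelled_m(1 / 299_792_458))
```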
The International System of Units:
The International System of Units (SI) was established by the General Conference on Weights and Measures (CGPM) in 1960. In the SI, length is expressed in meters, the SI base unit of length.
What Kinds of Errors are There?
Measurement error refers to the difference between the measured value and the true value of the target (or between the measured value and a specified value): Error = Measured value − True value. No matter how high the measurement precision, the true value can never be obtained exactly. To keep errors small, the following kinds of error must be taken into account; a small numeric sketch follows the list below.
- Systematic error: Error that shifts the measured value in a consistent way for a specific reason, for example differences between individual measuring instruments (instrumental error), temperature, or the measurement method.
- Random error: Error that occurs by chance during measurement, for example error caused by dust adhering to the measuring instrument.
- Negligence error: Error caused by the measurer's lack of experience or by an operating mistake, such as misreading a scale.
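As a rough numeric illustration of the error formula above, the sketch below takes a set of hypothetical repeated readings of a 50.000 mm reference and separates the consistent offset (an estimate of the systematic error) from the scatter around it (the random error).

```python
from statistics import mean, stdev

def measurement_error(measured_mm: float, true_value_mm: float) -> float:
    """Error = measured value - true value."""
    return measured_mm - true_value_mm

# Hypothetical repeated readings of a 50.000 mm reference piece.
true_value = 50.000
readings = [50.004, 50.006, 50.005, 50.003, 50.007]

errors = [measurement_error(r, true_value) for r in readings]
print(f"systematic offset (mean error): {mean(errors):+.4f} mm")
print(f"random scatter (std deviation): {stdev(errors):.4f} mm")
```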
What are the Factors of Error?
- Error due to temperature: The length (and volume) of an object changes with temperature, and this affects both the target being measured and the measuring instrument. The relationship between a temperature change and the corresponding change in length is expressed by the coefficient of thermal expansion, which varies with the type of material. For this reason, the International Organization for Standardization has set the standard temperature for length measurement at 20°C.
- Error due to material deformation: Applying force to an object deforms it by a certain amount; if the object returns to its original shape when the force is removed, the change is called elastic deformation. The force per unit area acting on an object is called stress, and within the elastic range it is proportional to the resulting deformation; the ratio between the two is expressed by the longitudinal elastic modulus (Young's modulus). As the stress increases, the amount of deformation also increases. Both effects are illustrated in the sketch after this list.
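The following minimal sketch puts numbers to both effects, using commonly quoted approximate values for steel (a thermal expansion coefficient of about 11.5 × 10⁻⁶ per °C and a Young's modulus of about 206,000 MPa); the part dimensions and load are hypothetical.

```python
def thermal_expansion_mm(length_mm: float, alpha_per_degC: float, delta_t_degC: float) -> float:
    """Change in length for a temperature change: dL = alpha * L * dT."""
    return alpha_per_degC * length_mm * delta_t_degC

def elastic_elongation_mm(force_N: float, length_mm: float, area_mm2: float, E_MPa: float) -> float:
    """Elastic elongation under an axial load (Hooke's law): dL = F * L / (A * E)."""
    return force_N * length_mm / (area_mm2 * E_MPa)

# A 100 mm steel part measured at 25 degC instead of the standard 20 degC:
print(f"thermal: {thermal_expansion_mm(100.0, 11.5e-6, 25.0 - 20.0):.4f} mm")     # ~0.0058 mm

# The same part (cross-section 20 mm^2) clamped with a 500 N axial force:
print(f"elastic: {elastic_elongation_mm(500.0, 100.0, 20.0, 206_000.0):.4f} mm")  # ~0.0121 mm
```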
Measurement Principle: Abbe's Principle
Abbe's principle is an important guideline for explaining measurement accuracy and for designing measuring instruments. It states that, to maximize measurement accuracy, the scale axis of the measuring instrument must lie on the same line as the dimension being measured.
For example, on a vernier caliper the scale is offset from the measuring position, so the caliper does not conform to Abbe's principle. A micrometer does follow Abbe's principle, because its measuring spindle is in line with the dimension being measured; its outside-diameter measurement accuracy is therefore high.
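The practical consequence can be sketched with the usual first-order estimate of the Abbe error: the offset between the scale axis and the measurement axis multiplied by the tangent of the angular error of the moving guide. The 30 mm offset and 20 arc-second tilt below are hypothetical illustration values.

```python
import math

def abbe_error_mm(offset_mm: float, tilt_arcsec: float) -> float:
    """First-order Abbe error: scale-to-measurement-axis offset times tan(angular error)."""
    tilt_rad = math.radians(tilt_arcsec / 3600.0)
    return offset_mm * math.tan(tilt_rad)

# Caliper-like layout: scale offset 30 mm from the measuring faces, 20 arcsec slider tilt.
print(f"{abbe_error_mm(30.0, 20.0) * 1000:.2f} um")   # about 2.9 um of error

# Micrometer-like layout: zero offset, so the first-order error vanishes.
print(f"{abbe_error_mm(0.0, 20.0) * 1000:.2f} um")    # 0.00 um
```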
What are Tolerances?
In any case, there will always be some error between the measured value and the actual value, so it is important to clearly define the allowable error range. In the field of measurement, the difference between the maximum allowable size and the minimum allowable size of an object is called the "tolerance". Legally recognized error ranges, such as those listed in industrial specifications, are also known as tolerances.
On an actual drawing, if a dimension is written as "60 (+0.045 / −0.000)", it means that the reference size is 60, the upper limit is 60.045, and the lower limit is 60.000.
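A minimal sketch of this kind of check is shown below; it simply tests whether a measured value falls inside the band defined by the drawing callout above (the 60.030 and 60.050 readings are hypothetical).

```python
def within_tolerance(measured: float, nominal: float, upper_dev: float, lower_dev: float) -> bool:
    """True if a measured dimension lies inside the tolerance band."""
    return (nominal + lower_dev) <= measured <= (nominal + upper_dev)

# The callout above: nominal 60, upper deviation +0.045, lower deviation -0.000.
nominal, upper_dev, lower_dev = 60.000, 0.045, 0.000

print(within_tolerance(60.030, nominal, upper_dev, lower_dev))  # True  (within the band)
print(within_tolerance(60.050, nominal, upper_dev, lower_dev))  # False (above the upper limit)
print(f"tolerance = {upper_dev - lower_dev:.3f}")               # 0.045
```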
The rationale for setting tolerances in practice is to strike a balance between accuracy and machining cost: tighter accuracy requirements raise the machining cost. Quality and cost can be balanced by assigning an appropriate tolerance value to each individual workpiece.
What is a Fit?
Another reason to set tolerances is that dimensional differences must be specified when combining multiple parts, such as a shaft and a hole. This is called mating or fitting. When reviewing a fit, the approach differs depending on whether the hole or the shaft is taken as the basis. Taking the shaft diameter as the reference: if the shaft only needs to pass freely through the hole, a clearance fit is used; if the shaft is to be inserted into the hole and then held fast, an interference fit is used; and if the requirement falls between the two, a transition fit is used.
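The three cases can be told apart directly from the limit dimensions of the two parts. The sketch below classifies a fit from hypothetical hole and shaft limits; the specific values are for illustration only.

```python
def classify_fit(hole_min: float, hole_max: float, shaft_min: float, shaft_max: float) -> str:
    """Classify a hole/shaft fit from the limit dimensions of both parts."""
    if shaft_max < hole_min:
        return "clearance fit"     # the shaft always passes freely through the hole
    if shaft_min > hole_max:
        return "interference fit"  # the shaft is always larger and must be pressed in
    return "transition fit"        # clearance or interference, depending on the actual parts

# Hypothetical limit dimensions in mm (hole_min, hole_max, shaft_min, shaft_max):
print(classify_fit(30.000, 30.021, 29.980, 29.993))  # clearance fit
print(classify_fit(30.000, 30.021, 30.035, 30.048))  # interference fit
print(classify_fit(30.000, 30.021, 30.002, 30.015))  # transition fit
```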
Analog and Digital Measuring Instruments:
In recent years, measuring instruments have become increasingly digital. For example, there are vernier calipers with digital counters. In the past it took practice to read the vernier scale of a caliper correctly, whereas a digital caliper instantly displays the value to hundredths of a unit. However, digital measuring instruments also have drawbacks. The displayed resolution may exceed the instrument's actual accuracy, and a slight increase or decrease in the operating force can change the displayed value. Especially with instruments that resolve to thousandths of a unit, the reading may not stabilize for some targets, making it impossible to decide which value to accept.
Depending on the job, it is sometimes easier to grasp a dimension intuitively with an analog instrument. Analog and digital measuring instruments should therefore each be used where appropriate, according to the application and the required accuracy.
Metrological Traceability:
To ensure food safety, production-history systems have been strengthened in recent years to track raw-material cultivation, transportation, processing, packaging, and shipment. This is called food traceability (history management). In the field of measurement, the same way of thinking has also come to be emphasized. It is called metrological traceability, and it links everyday measurements, through an unbroken chain of calibrations, back to national and international standards, demonstrating that those measurements can be trusted to be within tolerance.
International Standardization is Ongoing:
Metrological traceability has now been incorporated into international standards set by international bodies such as the International Committee for Weights and Measures (CIPM), as well as into national standards set by national research institutions. As economic globalization advances, the requirements for compliance with metrological traceability are steadily increasing.
The International Bureau of Weights and Measures (BIPM), a research organization operating under the International Committee for Weights and Measures, conducts basic research on the International System of Units (SI). With the globalization of manufacturing, compliance with international standards is necessary: for dimensional measurement results to be recognized consistently anywhere in the world, metrological traceability backed by international mutual recognition is essential.
What are the Classifications of Electronic Instruments?
In a broad sense, electronic measuring instruments are instruments that use electronic technology for measurement and analysis. By function, they can be divided into two categories: dedicated (special-purpose) and general-purpose. Dedicated electronic measuring instruments are designed and manufactured for specific purposes and are suited to measuring specific objects. General-purpose electronic measuring instruments are designed to measure one or more basic electrical parameters; they are suitable for a wide variety of electronic measurements and can be subdivided into many types according to their functions.
Dedicated Electronic Measuring Instruments:
- Audio/video analyzer: Audio/video signal generator, TV analyzer, video analyzer, audio analyzer, bit error analyzer.
- Optical communication tester: Optical spectrum analyzer, digital transmission analyzer, optical network analyzer, optical time-domain reflectometer, optical power meter/power probe, bit error tester, optical attenuator, light source, optical oscilloscope.
- RF and microwave instruments: Spectrum analyzer, network analyzer, impedance analyzer, signal generator, digital oscilloscope, noise figure analyzer, cable/antenna analyzer, modulation analyzer, power meter/power probe, frequency meter, LCR meter.
- Wireless communication tester: Mobile phone comprehensive tester, TDMA tester, radio comprehensive tester, PDC/PHS tester, antenna feeder tester, 3G tester, DECT tester, Bluetooth comprehensive tester, electronic load.
General Electronic Measuring Instruments:
- Signal generator: Used to provide the signals required for various measurements. According to the application, there are signal generators with different waveforms, frequency ranges, and output powers, such as low-frequency signal generators, high-frequency signal generators, and function generators. Other types include pulse signal generators, arbitrary waveform generators, and RF synthesized signal generators.
- Voltage measuring instruments: Used to measure the voltage, current, level, and other parameters of electrical signals, such as ammeters, voltmeters (including analog voltmeters and digital voltmeters), multimeters, etc.
- Time and frequency measuring instruments: Used to measure parameters such as frequency, time interval, and phase of electrical signals, such as various frequency meters, phase meters, wavelength meters, and various time and frequency standards.
- Signal analysis instruments: Used to observe, analyze, and record changes in electrical signals, such as oscilloscopes (both analog and digital), waveform analyzers, distortion analyzers, harmonic analyzers, spectrum analyzers, and logic analyzers.
- Electronic component testing instruments: Used to measure the electrical parameters of various electronic components and check whether they meet requirements. According to the test object, they can be divided into transistor testers, integrated circuit (analog and digital) testers, and circuit component testers.
- Radio wave characteristic tester: Used to measure parameters such as radio wave propagation and interference intensity, such as test receivers, field strength meters, interference testers, etc.
- Network characteristic test instruments: Used to measure the frequency, impedance, and power characteristics of electrical networks, such as impedance testers, frequency characteristic testers, network analyzers, and noise figure analyzers.
- Auxiliary instruments: Instruments used in conjunction with the above instruments, such as various amplifiers, attenuators, filters, recorders, and various AC and DC stabilized power supplies.