Is Measuring a Length of Time Continuous Measurement?
Continuous Measurement
Thermogravimetric analysis (TGA) provides continuous measurement by recording the weight changes of a material as it is heated during the measurement.
From: Nanomaterials and Devices, 2015
A STUDY OF SOLAR WATER PUMPING PARAMETERS FOR BAGHDAD AREA
AHMED M. HASSON, ... BASSIM E. AL-SAGIR, in Energy and the Environment, 1990
ABSTRACT
Continuous measurements of global solar radiation incident on a horizontal surface were made for the Baghdad area during the period from January 1977 to December 1987. Climatological data were recorded to determine the crop water requirements, total water discharge, and photovoltaic surface area.
Empirical relationships between these parameters and the global solar radiation were established. The pump power, total water discharge, and photovoltaic surface area for total heads of 10, 20, and 30 m were also derived by correlating the above parameters.
The results show that the global solar radiation incident on differently inclined surfaces can be successfully predicted when only the global solar radiation incident on a horizontal surface, HT, is available. Additionally, the pump power required to pump water from different water column depths can be estimated.
At current prices of fully tracking solar cells, photovoltaic water pumping appears very expensive compared with fixed photovoltaic systems, especially when both systems must be imported by developing countries.
URL: https://www.sciencedirect.com/science/article/pii/B9780080375397500946
Crack Growth Measurement
K.K. Ray, in Encyclopedia of Materials: Science and Technology, 2001
7 Experimental Measurement
Continuous measurement of crack growth is required for the generation of data in fatigue, creep, stress corrosion, or their interactions, such as creep-fatigue, corrosion fatigue, etc. These data are subsequently analysed as crack growth rate (da/dN or da/dt) versus stress range, strain range, or stress intensity factor range to assess material-characterizing parameters for predicting the life of structural components. In tests related to the determination of fracture toughness criteria, the specimens are usually fatigue pre-cracked, except for chevron notch fracture testing (ASTM E-1304-89, American Society for Testing and Materials 1993), and accurate measurement of the average length of the fatigue pre-cracked region is required. This average crack length, together with the critical load at fracture, allows one to calculate the plane strain fracture toughness of a material. The need for critical assessment of the crack length in static loading is particularly important in elastic–plastic fracture mechanics, where slow crack growth precedes final catastrophic failure.
Intermittent measurement of crack length may be required to monitor stable crack growth in monotonic loading for determination of EPFM criteria or for estimation of subcritical crack growth in aggressive environments to determine environment-assisted fracture toughness. The fracture toughness and fatigue tests are commonly carried out using standard fracture mechanics type specimen geometry such as compact tension, middle tension, edge-notched bend, edge-notched tension, etc. The description of these specimen geometries and the fixtures for these tests are available in ASTM standards E 399-90, E 647-93, E 1290-93, E 561-92a, E 1152-87 and E 1457-92 and so forth (American Society for Testing and Materials 1993).
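As an illustration of how a continuous crack-length record is reduced to growth-rate data, the following is a minimal sketch of the point-to-point (secant) evaluation of da/dN, in the spirit of the fatigue standards cited above; the crack-length versus cycle data are hypothetical.

```python
import numpy as np

# Hypothetical crack length (mm) vs. elapsed cycles from a fatigue test
N = np.array([0, 10_000, 20_000, 30_000, 40_000])   # cycles
a = np.array([10.0, 10.6, 11.4, 12.5, 14.1])        # crack length, mm

# Secant (point-to-point) estimate of the growth rate da/dN,
# conventionally reported at the mean crack length of each interval
da_dN = np.diff(a) / np.diff(N)      # mm/cycle
a_mid = (a[:-1] + a[1:]) / 2         # mm

for ai, rate in zip(a_mid, da_dN):
    print(f"a = {ai:5.2f} mm  ->  da/dN = {rate:.2e} mm/cycle")
```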
URL: https://www.sciencedirect.com/science/article/pii/B0080431526003132
Modeling of interface pressure profile generated over time
Bipin Kumar, ... R. Alagirusamy, in Science of Compression Bandages, 2014
9.1 Introduction
Continuous measurement of the interface pressure helps in better understanding compression management and is also very useful for obtaining the pressure profile generated by the bandage. It has been observed in the previous chapters that the interface pressure decreases over a period of time, thereby decreasing the effectiveness of the treatment. Knowledge of the pressure profile to which the bandage is exposed during compression therapy is of theoretical and practical importance in determining the efficacy of the treatment.
It has been demonstrated from the previous chapters that the decrease in the internal pressure beneath the bandage occurs because of relaxation of the stress in the bandage. The stress relaxation of a material is a viscoelastic property which refers to the behavior of internal stress reaching a peak and then reducing or relaxing over time under a fixed level of elongation. The relaxation behavior of the fibrous materials is usually described by two basic elements: the spring and the dashpot [1,2]. The spring describes the linear elastic behavior while the dashpot represents the viscous behavior of the Newtonian fluid. By making different combinations of spring and dashpot models, one can simulate the relaxation behavior of the fibrous materials such as yarn and fabric [3–5]. The quasi-linear viscoelastic (QLV) theory is frequently used in biomechanics to model the nonlinear, history-dependent viscoelastic behaviour of soft tissues and tendons [6–10]. The QLV theory could also be used to model the nonlinear viscoelastic behavior of the fibrous materials.
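To make the spring-and-dashpot idea concrete, here is a minimal sketch of stress relaxation under a fixed elongation for a generalized Maxwell (Prony-series) model, i.e., an equilibrium spring in parallel with several spring/dashpot branches; the moduli and relaxation times below are hypothetical placeholders, not fitted bandage parameters.

```python
import numpy as np

def relaxation_stress(t, eps0, E_inf, E, tau):
    """Stress under a fixed strain eps0 for a generalized Maxwell model:
    an equilibrium spring E_inf in parallel with spring/dashpot branches
    of moduli E[i] and relaxation times tau[i] (a Prony series)."""
    E, tau = np.asarray(E), np.asarray(tau)
    decay = np.exp(-t[None, :] / tau[:, None])            # one row per branch
    return eps0 * (E_inf + np.sum(E[:, None] * decay, axis=0))

t = np.linspace(0.0, 3600.0, 7)                 # time since application, s
sigma = relaxation_stress(t, eps0=0.5,          # 50% extension (hypothetical)
                          E_inf=40.0,           # long-term modulus, kPa
                          E=[30.0, 20.0],       # branch moduli, kPa
                          tau=[60.0, 900.0])    # relaxation times, s
print(sigma)   # stress peaks at t = 0 and relaxes toward eps0 * E_inf
```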
In the present chapter, an attempt has been made to model the relaxation phenomena in the bandage during its application, up to permanent deformation. Various mechanical models with different combinations of linear or nonlinear springs and dashpots, as well as the QLV model, have been used to describe the relaxation behavior of the bandage. The pressure profile of the bandage has then been obtained from these mechanical models of its stress relaxation behavior.
URL: https://www.sciencedirect.com/science/article/pii/B9781782422686500096
RADIATION DATA
J.K. Page, in Solar Energy Conversion II, 1981
CONCLUSION
The continuous measurement of radiation in connection with solar energy experiments is a difficult scientific task which requires adequate education, adequate training, and adequate instrumentation. Furthermore, it is an area where the process of data reduction is very time consuming, and the selection of the appropriate data-processing equipment must be made in conjunction with the selection of the instruments. It is simply not possible to compile an accurate radiation record at very low cost, however desirable people may feel that this would be. When one evaluates the costs of actual experiments together with the cost of the time of those who make the experiments, it will seldom be found that compromising on basic radiation instrumentation is a sound way forward.

Attempts by amateurs to devise instruments to measure radiation must be strongly discouraged, because the problems involved are so complicated and difficult that it requires a lifetime of experience to make reasonably accurate instruments in this very difficult field. Not only is one measuring very small amounts of energy, but one is measuring a rapidly varying signal. Furthermore, the spectral content of the signal is varying, and the relative balance between direct and diffuse radiation is changing. The weather is acting on the outside of the instrument all the time, tending to cause deterioration. The properties of the glass domes and so on are extremely critical, and a great deal of valuable time in solar experimentation has been lost by people making inadequate decisions concerning instrumentation, or, having made adequate decisions about instrumentation, failing to set up a proper supervisory system to ensure that the results obtained are reliable.

No one should be deceived by the apparent simplicity of radiation instruments; one should embark on such measurements only with proper humility, otherwise the results will be disastrous. Proper attention to calibration procedures is perhaps the most vital point of all.
URL: https://www.sciencedirect.com/science/article/pii/B9780080253886500131
Alberta Oil Sands
X.L. Wang, ... K.E. Percy, in Developments in Environmental Science, 2012
8.3.3 Gas and PM Concentrations and Emission Rates
The continuous measurement devices detected short-duration stack emission changes indicative of process and control-device variations, as illustrated in Figure 8.7. Stack parameters and gas concentrations were relatively stable over the test periods, in contrast to real-time concentrations from heavy haulers in mining operations (Watson et al., 2010a). The PM spikes in Stack C corresponded to NO spikes, probably due to short-duration process changes. Such correspondence was not found in Stack A or B samples during this study. Detection of short-term variations might be useful for explaining test-to-test differences in ERs and compositions.
Table 8.5 shows that CO2 had the highest ERs among all pollutants, ranging from 1.8 × 10⁵ to 2.7 × 10⁵ kg/h. While H2S ERs (0.003–0.038 kg/h) were low, large variations were found in NH3 ERs: Stack C emitted 1–2 orders of magnitude less NH3 (only 1.1% and 0.2% of the levels in Stacks A and B, respectively), which explains the higher acidity of its PM2.5. ERs ranged from 500 to 1599 kg/h for CO, 201 to > 1050 kg/h for SO2, 132 to 295 kg/h for NOx, and 8–49 and 11–68 kg/h for PM2.5 and PM10, respectively.
| Measured species | | Stack A conc. (mg/m³) | Stack B conc. (mg/m³) | Stack C conc. (mg/m³) | Stack A ER (kg/h) | Stack B ER (kg/h) | Stack C ER (kg/h) |
|---|---|---|---|---|---|---|---|
| Gases | CO | 698 ± 27 | 929 ± 43 | 451 ± 74 | 1599 ± 54 | 980 ± 44 | 500 ± 73 |
| | CO2 | (1.14 ± 0.02) × 10⁵ | (1.68 ± 0.03) × 10⁵ | (2.40 ± 0.03) × 10⁵ | (2.61 ± 0.05) × 10⁵ | (1.77 ± 0.02) × 10⁵ | (2.70 ± 0.05) × 10⁵ |
| | NO | 128 ± 4 | 126 ± 2 | 167 ± 11 | 295 ± 11 | 132 ± 2 | 187 ± 12 |
| | NO2 | 0.0 ± 1.2 | 0.0 ± 0.7 | 0.0 ± 1.3 | 0.0 ± 2.7 | 0.0 ± 0.7 | 0.0 ± 1.4 |
| | NOx | 128 ± 3 | 126 ± 2 | 166 ± 10 | 295 ± 11 | 132 ± 2 | 186 ± 11 |
| | NH3 | 7.3 ± 1.1 | 82.0 ± 21.3 | 0.16 ± 0.01 | 16.6 ± 2.4 | 86.4 ± 22.9 | 0.18 ± 0.01 |
| | SO2 | > 461 | 689 ± 122 | 177 ± 22 | > 1050 | 727 ± 132 | 201 ± 28 |
| | H2S | 0.017 ± 0.008 | 0.005 ± 0.001 | 0.003 ± 0.001 | 0.038 ± 0.017 | 0.005 ± 0.002 | 0.003 ± 0.001 |
| PM | PM1 | 13.3 ± 1.1 | 5.6 ± 0.1 | 36.7 ± 2.7 | 30.6 ± 2.7 | 5.9 ± 0.1 | 41.3 ± 3.4 |
| | PM2.5 | 21.5 ± 2.1 | 7.6 ± 0.3 | 38.0 ± 3.0 | 49.1 ± 5.0 | 8.0 ± 0.3 | 42.7 ± 3.8 |
| | PM10 | 29.6 ± 3.5 | 10.1 ± 1.2 | 38.2 ± 3.2 | 68.0 ± 8.4 | 10.7 ± 1.4 | 43.0 ± 4.0 |
| | PM25 | 29.6 ± 3.5 | 10.3 ± 1.4 | 38.2 ± 3.2 | 68.1 ± 8.4 | 10.9 ± 1.6 | 43.1 ± 4.0 |
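The emission rates in Table 8.5 are consistent with ER (kg/h) = concentration (mg/m³) × volumetric flow (m³/h) × 10⁻⁶. The stack flows are not given in this excerpt, so the sketch below back-calculates the implied flow from the CO row for Stack A and cross-checks it against the NO row.

```python
# ER (kg/h) = concentration (mg/m^3) x volumetric flow (m^3/h) x 1e-6 (mg -> kg)
c_co, er_co = 698.0, 1599.0       # Stack A CO, from Table 8.5
q = er_co / (c_co * 1e-6)         # implied flow, ~2.3e6 m^3/h (back-calculated)

c_no = 128.0                      # Stack A NO concentration, mg/m^3
er_no = c_no * q * 1e-6           # ~293 kg/h vs. 295 +/- 11 kg/h in Table 8.5
print(f"implied flow: {q:.3g} m^3/h, predicted NO ER: {er_no:.0f} kg/h")
```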
Table 8.6 shows that for Stack A, the SO2 ER from dilution tests was one-eighth of that from CEMS during the same sampling period, because the potassium carbonate (K2CO3) on the backup filter was completely consumed by SO2. SO2 from CEMS and a 2007 compliance test differed by < 2%. NOx and TSP ERs from dilution tests were 52% and 17%, respectively, of the ERs from the 2007 compliance tests. ERs from dilution sampling (except the unquantified SO2), CEMS, and compliance tests were well within Alberta's emission guidelines for these species. For Stack B, SO2 from dilution tests was ∼ 25% lower than the CEMS value but was similar to the 2007 compliance tests. NOx by dilution sampling was 45% higher than that from the 2007 compliance test. The TSP by dilution sampling was 21% of the hot filter catch and 3% of the total TSP from the compliance tests. The Stack B total TSP would exceed the emission guideline value by 51% if the impinger catch were included. For Stack C, TSP by dilution sampling was 16% lower than the 2010 compliance test result, and both NOx and TSP were < 16% of the emission guidelines. The discrepancy in TSP between dilution sampling and compliance tests might be partially attributed to losses of particles > 15 μm in dilution sampling, positive artifacts from the impinger catch, and emission variations between the 2007 and 2008 testing periods.
| Stack | Test | NOxᵃ | SO2 | TSP-front | TSP-totalᵇ |
|---|---|---|---|---|---|
| Stack A | Dilution | 452 ± 17 | > 1050 | NAᶜ | 68 ± 8 |
| | CEMS | NA | 8699 ± 157 | NA | NA |
| | Compliance | 870 ± 69 | 8843 ± 431 | NA | 392 ± 48 |
| | Guideline | 1500 | 16,400 | NA | 600 |
| Stack B | Dilution | 203 ± 4 | 727 ± 132 | NA | 10.9 ± 1.6 |
| | CEMS | NA | 951 ± 288 | NA | NA |
| | Compliance | 140 ± 19 | 741 ± 239 | 52 ± 10 | 378 ± 50 |
| | Guideline | NA | NA | NA | 250 |
| Stack C | Dilution | 284 ± 17 | 201 ± 28 | NA | 43.1 ± 4.0 |
| | CEMS | NA | NA | NA | NA |
| | Compliance | NA | NA | NA | 51.09 |
| | Guideline | 1800 | NA | NA | 340 |
- ᵃ Compliance NOx was measured by Alberta Method 7A, where NOx in the flue gas sample was oxidized to nitrate, measured by ion chromatography (IC), and reported in terms of NO2. To be comparable to the compliance tests, NOx from dilution sampling is also reported as NO2 via ER(NO2) = ER(NO) × 46/30.
- ᵇ The compliance tests measured TSP emissions based on Alberta Method 5: TSP-front is the hot filter catch; TSP-total is the hot filter plus impinger catches. The dilution-sampling TSP uses PM25 from the OPC.
- ᶜ Data not available.
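Applying the conversion from footnote (a) to the NO emission rates of Table 8.5 reproduces the dilution-sampling NOx entries of Table 8.6 to within rounding and the stated uncertainties:

```python
# Footnote (a): ER(NO2) = ER(NO) x 46/30 (molecular weights of NO2 and NO)
for stack, er_no in [("A", 295.0), ("B", 132.0), ("C", 187.0)]:   # Table 8.5, kg/h
    print(f"Stack {stack}: NOx as NO2 = {er_no * 46 / 30:.0f} kg/h")
# -> 452, 202, 287 kg/h vs. 452, 203, 284 kg/h in Table 8.6
```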
Table 8.7 compares particle size distributions from this study with those measured from a 2002 in-stack survey. The methods are different, and the processes are probably better controlled for the 2008 tests, so a precise correspondence is not expected. The PM2.5 and PM10 ERs from the in-stack survey were, respectively, 63% and 15% lower than those by the dilution method, while TSP was 51% higher. The in-stack hot filter did not collect particles that would nucleate and grow upon cooling and therefore underestimated fine particle concentrations. This is probably why the hot filter PM2.5 was so much lower than the dilution sample. The in-stack TSP (> 10 μm) fraction is uncertain and subject to contamination, as it was recovered by washing the sampling probe and cyclone with acetone (U.S. EPA, 2010). However, losses for particles > 15 μm in the dilution sampling method were not accounted for, and the OPC only measures particles ≲ 25 μm.
| Test nameᵃ | Test date | PM2.5 ER (kg/h) | PM10 ER (kg/h) | TSP ER (kg/h) |
|---|---|---|---|---|
| In-stack survey | 5/1/2002–5/2/2002 | 18 ± 2 | 58 ± 10 | 103 ± 27 |
| This study | 8/9/2008–8/11/2008 | 49 ± 5 | 68 ± 8 | 68 ± 8 |

- ᵃ The in-stack sampling followed a setup similar to U.S. EPA Method 201A, with both PM10 and PM2.5 in-stack cyclones installed (U.S. EPA, 2010). The condensable fraction captured in the impingers was not accounted for in the in-stack test data.
URL: https://www.sciencedirect.com/science/article/pii/B9780080977607000081
Chemical Analysis
W.G. Cummings, ... I. Verhappen, in Instrumentation Reference Book (Fourth Edition), 2010
24.3.4.6 Salt-in-Crude-Oil Monitor
A rapid continuous measurement of the salt in crude oil before and after desalting is based on measuring the conductivity of a solution to which a known quantity of crude oil has been added. The sample of crude oil is continuously circulated through a loop in the measurement section of the salt-in-crude monitor. When the test cycle is initiated, solvent (xylene) is introduced from a metering cylinder into the analyzer cell. A sample is then automatically diverted from the sample circulating loop into a metering cylinder calibrated to deliver a fixed quantity of crude oil into the analysis cell. A solution containing 63 percent n-butanol, 37 percent methanol, and 0.25 percent water is then metered into the analysis cell from another calibrated cylinder.
The cell contents are thoroughly mixed by a magnetic stirrer; then the measuring circuit is energized and an ac potential is applied between two electrodes immersed in the liquid. The resulting ac current is displayed on a milliammeter in the electrical control assembly, and a proportional dc millivolt signal is transmitted from the meter to a suitable recorder.
At the end of the measuring period, a solenoid valve is opened automatically to drain the contents of the measuring cell to waste. The minimum cycle time is about 10 minutes.
Provision is made to introduce a standard sample at will to check the calibration of the instrument. Salt concentrations between 1 and 200 kg salt per 1,000 m³ of crude oil can be measured with an accuracy of ±5 percent and a repeatability of 3 percent of the quantity being measured.
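How the recorded millivolt signal is reduced to a salt concentration is not detailed in this excerpt; the sketch below assumes a simple single-point linear calibration against the standard sample mentioned above, with all numbers hypothetical.

```python
def salt_concentration(signal_mv, std_signal_mv, std_salt):
    """Hypothetical single-point linear calibration: scale the measured
    conductivity signal by the response of the standard sample
    (std_salt in kg salt per 1,000 m^3 of crude oil)."""
    return std_salt * (signal_mv / std_signal_mv)

# Made-up example: a 100 kg/1,000 m^3 standard reads 250 mV
print(salt_concentration(180.0, 250.0, 100.0))   # -> 72.0 kg per 1,000 m^3
```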
URL: https://www.sciencedirect.com/science/article/pii/B9780750683081000243
Applications of Radioisotope Instruments in Industry
J.F. Cameron, C.G. Clayton, in Radioisotope Instruments, 1971
Tin coatings
For the continuous measurement of the thickness of tin coatings on steel, β-backscatter gauges similar to those described above for zinc are in use. For coating thicknesses in the range 0.4–3 μm, an accuracy of ±0.02 μm is obtained with a 150 mCi ¹⁴⁷Pm source and a time constant of 10 sec. (8)
Radioisotope X-ray fluorescence gauges using ³H/Zr bremsstrahlung sources to excite Fe K X-rays have also been developed for measuring tin-plate. (9, 10) In extending this instrument to a continuous tin-plate line, an ionization chamber was preferred as it could be adapted to standard industrial indicating units. (11) The measuring heads of a gauge which measures tin thickness on both sides of steel strip are shown in Fig. 2.14. Fifteen ³H/Zr sources with a total activity of 40 Ci are mounted in a row along the centre of each detector window. To standardize the gauge, the heads are withdrawn to the side of the strip, where they are positioned in front of steel reference plates. With a time constant of 10 sec, variations of tin thickness of ±1% in the range 0.1 to 1.5 lb per basis box can be detected.
A portable instrument, based on the same technique and designed specifically to measure the thickness of tin coatings on steel, is shown in Fig. 3.64. One measurement can be made in 5 to 10 sec, and an accuracy within ±1% of coating weight can be achieved. The instrument, which operates over a sample area of 4 in², consists essentially of an ionization chamber detector and four 2.5 Ci ³H/Zr sources mounted on the ionization chamber window. The output from the ionization chamber is coupled directly to a solid-state electrometer amplifier, which produces an output voltage that decreases as the coating weight increases. This output voltage is compared with a potential derived from a digital "set-weight" dial, with a centre-zero meter indicating any out-of-balance condition. The design leads to very simple operation, and the construction is robust enough to stand up to the rigours of use in a steel works adjacent to a coating line.
Although the β-backscatter method is also used for tin-plate thickness measurement, its sensitivity is low compared to that of an X-ray fluorescence system, and highly stable, sensitive instruments are therefore required. In addition, errors can arise in the β-backscatter method from magnetism and variations in the hardness of the steel, changes in air density, and the presence of mineral oils which are often used to preserve the surface finish. The X-ray fluorescence technique is not affected to the same degree and is now the preferred method for measurements of this type.
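As an illustration of the underlying calibration idea, the sketch below inverts an assumed exponential decrease of the substrate Fe K signal with coating weight; the constants I0 and mu are hypothetical, and a real gauge is standardized empirically against reference plates as described above.

```python
import math

def coating_weight(I, I0, mu):
    """Invert an assumed exponential attenuation of the substrate Fe K
    X-ray signal by the tin coating: I = I0 * exp(-mu * w)."""
    return math.log(I0 / I) / mu

# Hypothetical calibration: bare-steel signal I0 and an effective
# attenuation coefficient mu expressed per (lb per basis box) of tin
I0, mu = 1000.0, 0.8
print(coating_weight(520.0, I0, mu))   # -> ~0.82 lb per basis box
```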
URL: https://www.sciencedirect.com/science/article/pii/B9780080158020500082
Real-time signal processing of guided waves acquired on high-speed trains for health monitoring of bogie systems
M. Hong, ... L.-M. Zhou, in Recent Advances in Structural Integrity Analysis - Proceedings of the International Congress (APCF/SIF-2014), 2014
4.3 Temperature variation and baseline compensation
Temperature variation in continuous measurement can create discrepancies between a current signal Y and its corresponding baseline signal X even when there is no damage in the bogie, manifested as time shifting and/or different amplitude scales, which leads to outdated benchmarking. This is because X is pre-acquired under specific environmental conditions, which may differ significantly from the current conditions when Y is recorded. In order to compensate for continuous changes in temperature, the compensated baseline Xcomp can be defined as:

(1) Xcomp(t) = a · Xτ(t) = a · X(t − τ), with a = √(E[(Y − μY)²] / E[(Xτ − μXτ)²])

where τ is the time shift at which the cross correlation of X and Y reaches its maximum, E signifies the expected value, μ is the mean value, and Xτ hence denotes the lagged baseline signal. The factor a here is a scaling ratio between the magnitudes of Y and Xτ, which is also attributed entirely to ambient effects, so that after such an adjustment the rescaled, time-shifted baseline signal Xcomp eliminates discrepancies created by the changing temperature and is paired to its corresponding current signal Y as if it were measured under the current conditions.
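A minimal numerical sketch of this compensation: the lag τ is taken from the peak of the cross-correlation and the scaling a from a standard-deviation ratio. The excerpt does not spell out the authors' exact definition of a, so the ratio used below is an assumption.

```python
import numpy as np

def compensate_baseline(x, y):
    """Time-shift and rescale baseline x to match current signal y, in the
    spirit of Eq. (1); the standard-deviation scaling is an assumption."""
    lags = np.arange(-len(x) + 1, len(x))
    tau = lags[np.argmax(np.correlate(y, x, mode="full"))]  # lag of max cross-correlation
    x_tau = np.roll(x, tau)                                 # lagged baseline X_tau
    a = np.std(y) / np.std(x_tau)                           # scaling ratio a
    return a * x_tau

# Example: the "current" signal is a shifted, rescaled copy of the baseline
t = np.linspace(0.0, 1.0, 500)
x = np.sin(2 * np.pi * 25 * t) * np.exp(-5 * t)   # toy baseline signal
y = 0.8 * np.roll(x, 7)                           # shifted + rescaled current signal
print(np.max(np.abs(compensate_baseline(x, y) - y)))   # ~0 residual
```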
URL: https://www.sciencedirect.com/science/article/pii/B9780081002032500383
Clinical applications for imaging photoplethysmography
Sebastian Zaunseder, Stefan Rasche, in Contactless Vital Signs Monitoring, 2022
7.2 Patient monitoring and risk assessment
7.2.1 Current monitoring—target groups and technology
Patient monitoring denotes the (continuous) measurement of physiologic parameters over time. It aims at instantaneous estimates of a patient's condition, generates alarms where necessary, and provides a basis for taking further actions (interventions, treatments). According to [42], at least three categories of patients need monitoring.
1. Patients with compromised physiological regulation.
2. Stable patients with a condition that could suddenly change to become life threatening.
3. Patients in a critical state.
Generally, patients in intensive care units (ICUs), as well as patients during and after surgery or certain interventions, fall into these categories and are regularly monitored. Depending on the clinical field, patient condition, and local practice, the extent of monitoring varies considerably, ranging from intermittent non-invasive measurements up to invasive continuous measurements. Table 7.1 provides a comprehensive overview of clinically used vital parameters and common ways of assessing them during monitoring.
Vital parameter | Information content | Typical assessment |
---|---|---|
General monitoring [8,12] | ||
Body temperature | Hypothermia, fever | Thermistors, thermocouple |
Heart rate | Arrhythmia, abnormal heart rate | Electrocardiography, photoplethysmography |
Blood pressure | Global hemodynamics | Sphygmomanometer, arterial catheter |
Respiratory rate | Respiratory drive | Impedance pneumography, oronasal flow sensor, respiration belt |
Oxygen saturation | Respiratory function, vascular function | Pulse oximetry |
Extended general monitoring [8,12] | ||
Pain | Current discomfort | Multivariate personal assessment |
Level of consciousness | Cerebral processes | Multivariate personal assessment (e.g. AVPU, Glasgow coma scale) |
Urine output | Renal function, vascular function | Urinary catheter |
Hemodynamic monitoring [10,11,20,52] | ||
Electrical heart activity | Electrical conduction, repolarization, ischemia | Electrocardiography |
Cardiac output | Hemodynamic state | Indicator dilution (catheter), CO rebreathing |
Central venous pressure | Cardiac function | Venous catheter |
Pulmonary arterial pressure | Cardiac function | Pulmonary artery catheter
Heart/valve mechanics | Intracardiac flow and filling, myocardial movement | Cardiac ultrasound |
Microcirculation | Local perfusion | Clinical signs, Laser-Doppler/Speckle, dark field imaging, tissue PCO2 |
Neurological Monitoring [34,37] | ||
Brain activity | Coma, epileptic seizures | Electroencephalogram |
Intracranial pressure | Fluid accumulation | Intracranial catheter |
Tissue oxygenation | Cerebral perfusion | NIRS, intracranial catheter |
Respiratory/metabolic monitoring [11] | ||
Breath carbon dioxide | Lung function | Capnography |
Blood gas | Lung function, gas exchange | Laboratory, multispectral PPG |
Blood sugar | Hypo-/hyperglycemia | Laboratory, catheter |
7.2.2 Patient monitoring by iPPG—measures of relevance
Motivation: Although clinically indispensable, current monitoring devices and equipment pose inherent limitations and disadvantages. Invasive monitoring carries a particular risk of infection, vascular lesions, and thrombosis. Even non-invasive contact probes are a significant source of contamination and tissue damage. Contact-based monitoring further impedes the mobility of patients and causes discomfort. Depending on its complexity, the monitoring setup is time consuming and resource intensive. Finally, it can hamper access to the patient for interventions and nursing. iPPG has significant advantages over current monitoring equipment and can overcome such limitations. At the same time, as Table 7.1 indicates, it can yield multiple parameters of fundamental importance for monitoring.
Monitoring of heart rate: Heart-rate monitoring is the genuine application of iPPG; it is the most fundamental and most widespread task. Consequently, a large number of works relate to heart-rate monitoring [3,56]. Most works exploit the variation of light absorption associated with the blood volume pulse. However, the cardiac ejection and the traveling pulse wave also generate movements that can be analyzed to yield a pulse signal (global or local ballistocardiographic effects [56]). Recent developments have ensured that iPPG can be used for heart-rate monitoring even under difficult recording conditions with respect to movements and light variations. Image processing has developed from simple static regions of interest (ROIs) to highly flexible time-dependent ROIs. Signal processing has contributed powerful methods of color-channel combination, which can efficiently augment the pulse signal even under challenging conditions. Though most works focus on mean heart rate, i.e., the heart rate averaged over several seconds, it is possible to detect single heart beats and reveal heart rate variability as well [3,56,57].
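As a concrete illustration, here is a minimal sketch of spectral heart-rate estimation from a spatially averaged channel trace; the synthetic green-channel trace, the band limits, and the frame rate are hypothetical stand-ins for a real ROI signal.

```python
import numpy as np

def heart_rate_bpm(trace, fs):
    """Estimate mean heart rate from a (hypothetical) spatially averaged
    ROI trace via the spectral peak in a plausible band
    (0.7-3.5 Hz, i.e., 42-210 bpm)."""
    x = trace - np.mean(trace)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    power = np.abs(np.fft.rfft(x)) ** 2
    band = (freqs >= 0.7) & (freqs <= 3.5)
    return 60.0 * freqs[band][np.argmax(power[band])]

# Synthetic 10 s green-channel trace at 30 fps: 1.2 Hz (72 bpm) pulse + noise
fs = 30.0
t = np.arange(0.0, 10.0, 1.0 / fs)
trace = 0.01 * np.sin(2 * np.pi * 1.2 * t) + 0.005 * np.random.randn(t.size)
print(heart_rate_bpm(trace, fs))   # ~72 bpm
```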
Monitoring of respiratory rate: Similar to heart rate, monitoring of respiratory rate from videos has been under investigation for a long time. Respiration affects videos in several ways: it modulates the periodicity of heart beats, affects the vascular filling, and causes movements (see Fig. 7.1) [14,40]. Each effect, alone or in combination, can be used to derive respiration, but the analysis of movements is the most common approach.
Monitoring of oxygen saturation: Clinically, the monitoring of oxygen saturation is the most important application of contact photoplethysmography, so it is natural to employ iPPG in a similar way. Wieringa et al. published early attempts at camera-based pulse oximetry [53]. Compared to heart-rate and respiratory monitoring, pulse oximetry by iPPG is more complex. On the one hand, RGB cameras do not provide an optimal basis for determining the oxygen saturation because the wavelengths are below the isosbestic point. The combination of cameras operating at suitable wavelengths (e.g., red and near-infrared) overcomes this limitation. On the other hand, distortions heavily affect the signal morphology and can easily lead to erroneous readings. Progress in video processing, particularly the identification and detailed tracking of suitable ROIs, allows improved arterial oxygen-saturation (SpO2) monitoring [15]. Besides, simulation studies and advances in signal processing have improved SpO2 assessment techniques and finally proved their applicability [49,50].
However, compared to monitoring heart rate and respiratory rate, monitoring of oxygen saturation is technically much more demanding. Amongst others, the need for at least two wavelengths (preferably one of them in the near-infrared range, which shows poor signal quality) and respective illumination concepts, as well as the need for proper calibration, are troublesome.
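The two-wavelength principle reduces to the classic ratio-of-ratios computation; in the sketch below, the calibration constants and the toy pulsatile traces are hypothetical, since a deployed system is calibrated empirically against reference oximetry.

```python
import numpy as np

def spo2_ratio_of_ratios(red, nir, a=110.0, b=25.0):
    """SpO2 from two wavelength channels via R = (AC/DC)_red / (AC/DC)_nir
    and the linear map SpO2 ~ a - b*R; a and b are placeholder constants."""
    ac_dc = lambda x: (np.max(x) - np.min(x)) / np.mean(x)
    return a - b * (ac_dc(red) / ac_dc(nir))

# Toy pulsatile traces (DC level + small AC pulse) for the two channels
t = np.linspace(0.0, 5.0, 150)
red = 100.0 + 1.0 * np.sin(2 * np.pi * 1.2 * t)
nir = 100.0 + 1.6 * np.sin(2 * np.pi * 1.2 * t)
print(spo2_ratio_of_ratios(red, nir))   # ~94% with these made-up numbers
```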
Monitoring of blood pressure: (Arterial) blood pressure monitoring by cameras has become a focus of iPPG in recent years. Blood pressure and its variations affect various video characteristics. Morphological signal characteristics, in the simplest case the amplitude of iPPG signals, vary with the blood pressure or pulse pressure, respectively [36,47]. Other studies build on the idea of pulse-wave velocity and exploit temporal characteristics; such approaches use phase shifts between multiple measurement sites [21] or combine ballistocardiographic signals and blood-volume signals [41]. Driven by progress in the field of machine learning, approaches that integrate various signal characteristics, sometimes in a black-box fashion, are also in use to estimate the blood pressure [27].
Further available information: The aforementioned measures are most central to the research activities with respect to iPPG and fundamental to today's routine clinical monitoring (compare to Table 7.1). However, videos provide even more information with relevance for clinical monitoring.
- A couple of works detail methods for visual pain assessment exploiting facial expressions [16]. Heart-rate variability, which can be derived from videos as well, has been associated with pain and awareness [19]. In the same way, iPPG might add relevant information to general monitoring.
- Some works have highlighted the possibility of pulse-wave analysis (PWA) and pulse-wave decomposition from iPPG recordings [13]. Such analysis techniques might capture hemodynamic function beyond the blood pressure and thus be of importance for hemodynamic monitoring. In this context, the analysis of the jugular venous pulse (JVP) should also be mentioned. The JVP is closely related to the central venous pressure and right atrial pressure, respectively, and is thus of major interest. Amelard et al. [2] and Gracia-Lopez et al. both show that camera-based acquisition of the JVP and detection of its fiducial points are feasible. However, future studies will have to investigate its usability.
- Rasche et al. show that iPPG can replace, to some extent, Laser Speckle Contrast Analysis (LASCA) and thus allow hemodynamic analysis with respect to the microcirculation [35].
- Monitoring of respiratory rate is common. Interestingly, current works try to reveal information about respiration beyond the respiratory rate. For example, Liu et al. detail a method that aims to capture spirometric information like forced expiratory volume and peak expiratory flow [26], which is of importance for general respiratory monitoring.
- Recently, Pilz et al. have shown a relationship of iPPG to brain waves. This interaction might gain importance for neurological monitoring [33].
- Nishidate et al. detail a method to access tissue metabolic measures (melanin, oxygenated blood, and deoxygenated blood) [32]. The approach yields information beyond pulse oximetry. By using multispectral camera systems, similar approaches might be extended to monitor multiple substances and contribute to contactless metabolic monitoring.
Certainly, these approaches are not readily applicable. Particularly, the high number of confounding factors is likely to affect the results under real-world conditions. However, they indicate, according to Table 7.1, the high potential of iPPG for patient monitoring.
General validation: As discussed before, classical monitoring has some limitations, and the consideration of accessible information underlines the high potential of cameras for monitoring purposes. Cameras can be used alone or combined with conventional sensors or other non-contact modalities. A striking example is the combination with infrared cameras, i.e., thermography [40]. With this combination, one can acquire the five most important vital parameters for monitoring without any contact. However, a couple of aspects have to be investigated before camera-based patient monitoring becomes a reality. Generally, many studies in the field focus on principal feasibility; elaborate tests according to guidelines or normative standards are not common. Even worse, evidence on the effectiveness in the critically ill is largely lacking. The few studies considering patients show, e.g., that heart-rate assessment is possible in cardiovascularly impaired patients and during arrhythmias like atrial fibrillation. However, other arrhythmic events, like ventricular arrhythmia, are likely to heavily affect iPPG and probably hinder a proper analysis. Obviously, much more clinical evidence is needed before iPPG can be part of routine monitoring or replace it (in certain areas).
7.2.3 Patient monitoring by iPPG—realistic usage scenarios
iPPG features highly relevant information for clinical monitoring. It allows a much more convenient assessment, but at the expense of information content and reliability. The management of the critically ill requires highly accurate physiological measurements. In such situations, some inconvenience is preferred for more reliable and informative signals. Consequently, in our opinion, iPPG is not likely to replace current equipment to monitor standard vital parameters of the critically ill on a larger scale (e.g., if a patient is at high risk for cardiac arrhythmia, the electrocardiogram is indispensable because it provides much more information than the PPG ever could). However, there are specific clinical applications related to patient monitoring where iPPG might gain importance.
Neonatal monitoring is one example. Particularly premature neonates are at risk of severe complications (including sudden infant death). Monitoring of heart rate and respiratory rate in neonates is therefore common, but the available equipment is troublesome. Today's contact sensors are difficult to apply, generate stress for neonates, and can even cause lesions or ischemia. Various works have thus addressed non-contact monitoring of neonates and preterm infants by various techniques, and owing to their wide possibilities, cameras are an interesting choice [1,4,7]. Besides monitoring in clinics, ambulatory neonatal monitoring might be an interesting application for the future.
Apart from neonatal monitoring, situations in intensive care are rare where monitoring standard vital parameters is beneficial but contact-based sensors cannot be applied for objective medical reasons. However, the ease of use and higher convenience of non-contact monitoring to capture the usual vital signals should not be underestimated. Together with the ubiquitous availability of the technically easy-to-implement iPPG, it is conceivable to monitor the vital signs of almost every patient in a hospital. Based on current data, a significant reduction of critical incidents and even cardiac arrests in hospitals can be expected from this. According to an expert opinion, two of three cardiac arrests in hospitals are avoidable [17].

The most crucial points in preventing these incidents are an early detection of deterioration and a sufficient sensitivity of diagnostic tools. Delayed diagnosis of deterioration is the most common reason for a cardiac arrest, though abnormal vitals are already present hours before the event [17]. Several tools have been implemented to aid in identifying patients at risk (e.g., the Modified Early Warning Score, MEWS, and the Cardiac Arrest Risk Triage, CART; see Table 7.2 for an example). However, contemporary scores prevent barely more than every other critical incident [6]. Key to a higher success rate is the consistent recording of the score items. Respiratory rate, heart rate, and diastolic blood pressure were identified as the most predictive parameters in general-ward patients for predicting cardiac arrest or transfer to the intensive care unit [6,9]. Since (at least) the first two of these are easily derived by iPPG, an "always-on" monitoring could improve the feasibility of risk scores and ultimately the safety of patients in hospitals. Further available information, e.g., derived by the analysis of facial expressions, might further support the usage of videos for risk assessment in clinical settings.

A closely related area is (intermittent) monitoring in nursing homes, where automated monitoring is not common today. However, as in hospitals, a user-friendly technique such as iPPG can be used (at least) for intermittent measurements and help to recognize deterioration early. Similarly, iPPG offers wide opportunities for home care in telemedical scenarios. Here too, a high level of availability, easy applicability, and versatile information make the use of cameras a promising approach.
| Score | 3 | 2 | 1 | 0 | 1 | 2 | 3 |
|---|---|---|---|---|---|---|---|
| RR | >35 | 31–35 | 21–30 | 9–20 | | | <7 |
| SpO2 | <85 | 85–89 | 90–92 | >92 | | | |
| T | | >38.9 | 38–38.9 | 36–37.9 | 35–35.9 | 34–34.9 | <34 |
| SBP | | >=200 | | 101–199 | 81–100 | 71–80 | <=70 |
| HR | >129 | 110–129 | 100–109 | 50–99 | 40–49 | 30–39 | <30 |
| AVPU | | | | A | V | P | U |
RR – Respiratory rate in breaths per minute, SpO2 – Arterial oxygen saturation in %
T – Body temperature in °C, SBP – Systolic blood pressure in mmHg, HR – Heart rate in beats per minute
AVPU – Alert, Verbal, Pain, Unresponsive
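Evaluating such a score is a simple lookup over vital-sign bands. Here is a sketch for the heart-rate item, the only row of Table 7.2 given in full in this excerpt:

```python
def heart_rate_score(hr):
    """Score the heart-rate item of the early-warning table (Table 7.2)."""
    if hr > 129: return 3
    if hr >= 110: return 2
    if hr >= 100: return 1
    if hr >= 50: return 0
    if hr >= 40: return 1
    if hr >= 30: return 2
    return 3

for hr in (45, 72, 115, 135):
    print(hr, "->", heart_rate_score(hr))   # 1, 0, 2, 3
```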
The aforementioned applications focus on common vital signals like heart rate and respiratory rate. These specific applications impose special requirements regarding user-friendly signal acquisition, rendering iPPG advantageous compared to contact-based sensors. Clinical routine monitoring, in turn, is likely to adhere to contact-based sensors because of their higher reliability. iPPG might still find application in routine monitoring because it offers complementary information: iPPG allows a specific assessment of skin microcirculation. Skin microcirculation is known to carry prognostic information and can be used to guide therapy, yet monitoring it is not common in today's routine monitoring. Alternative techniques like laser speckle also provide information about the skin microcirculation, but they have specific limitations, e.g., regarding their application. In this regard, iPPG has high potential to find application in routine monitoring and, in the future, add information about the microcirculation (see Sects. 7.3.2 and 7.3.3 for more detailed information on the diagnostic potential). However, more research, including prospective patient studies, is required to prove iPPG's clinical value in this regard.
URL: https://www.sciencedirect.com/science/article/pii/B9780128222812000159
Sampling and measurement of toxic fire effluent
P. Fardell, E. Guillaume, in Fire Toxicity, 2010
11.4 Analysis of gaseous fire effluents: general principles
Clearly the ideal analytical result would be a continuous measurement of each species of interest over time, throughout the period of generation of the fire atmosphere. In practice, because of the restraints summarised in Section 11.1, this ideal is rarely achievable. The options for analysis depend on the species to be measured and the available instrumentation, and the latter may not necessarily be ideal owing to, for example, economic restraints and available operator skills. It is also important to consider the end use of analytical data when deciding on the methodology to employ for specific species. For example, it may not be appropriate to choose highly sophisticated, expensive-to-operate and sensitive equipment to obtain very accurate and precise analytical data for trace compounds when the accuracy, precision and scope of the end-use application demand far less; the toxic effects of fire effluents on humans vary from person to person and are based on estimates from sub-lethal human exposures or animal exposure experiments. Often relatively simple techniques may suffice, provided the limitations of the method are understood and allowed for.
URL: https://www.sciencedirect.com/science/article/pii/B9781845695026500117
Source: https://www.sciencedirect.com/topics/engineering/continuous-measurement