25 results for Error correction methods
in BORIS: Bern Open Repository and Information System - Bern - Switzerland
Abstract:
This paper is a summary of the main contributions of the PhD thesis published in [1]. The main research contributions of the thesis are driven by the research question of how to design simple, yet efficient and robust, run-time adaptive resource allocation schemes within the communication stack of Wireless Sensor Network (WSN) nodes. The thesis addresses several problem domains with contributions on different layers of the WSN communication stack. The main contributions can be summarized as follows: First, a novel run-time adaptive MAC protocol is introduced, which stepwise allocates the power-hungry radio interface in an on-demand manner when the encountered traffic load requires it. Second, the thesis outlines a methodology for robust, reliable and accurate software-based energy estimation, which is calculated at network run-time on the sensor node itself. Third, the thesis evaluates several Forward Error Correction (FEC) strategies to adaptively allocate the correctional power of Error Correcting Codes (ECCs) to cope with temporally and spatially variable bit error rates. Fourth, in the context of TCP-based communications in WSNs, the thesis evaluates distributed caching and local retransmission strategies to overcome the performance-degrading effects of packet corruption and transmission failures when transmitting data over multiple hops. The performance of all developed protocols is evaluated on a self-developed real-world WSN testbed; the protocols achieve superior performance over selected existing approaches, especially where traffic load and channel conditions are subject to rapid variations over time.
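The adaptive FEC strategy described above can be illustrated with a minimal sketch: the node estimates the channel bit error rate and selects a correspondingly strong code. The thresholds and code choices below are hypothetical illustrations, not the strategies evaluated in the thesis.

```python
def select_fec(estimated_ber: float) -> str:
    """Map an estimated bit error rate (BER) to an ECC strength.

    Thresholds and codes are illustrative only: a clean channel needs no
    redundancy, while a noisy one justifies a stronger (and more
    power-hungry) code.
    """
    if estimated_ber < 1e-5:
        return "none"            # channel is clean, save energy
    elif estimated_ber < 1e-3:
        return "hamming(7,4)"    # corrects 1 bit error per 7-bit block
    else:
        return "rs(255,223)"     # Reed-Solomon, corrects up to 16 symbol errors
```

In a real node the BER estimate would come from link-layer statistics (e.g., acknowledged vs. corrupted frames), so the selection can adapt as channel conditions vary over time.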
Abstract:
Cell competition is the short-range elimination of slow-dividing cells through apoptosis when confronted with a faster growing population. It is based on the comparison of relative cell fitness between neighboring cells and is a striking example of tissue adaptability that could play a central role in developmental error correction and cancer progression in both Drosophila melanogaster and mammals. Cell competition has led to the discovery of multiple pathways that affect cell fitness and drive cell elimination. The diversity of these pathways could reflect unrelated phenomena, yet recent evidence suggests some common wiring and the existence of a bona fide fitness comparison pathway.
Abstract:
The isotope composition of selenium (Se) can provide important constraints on biological, geochemical, and cosmochemical processes taking place in different reservoirs on Earth and during planet formation. To provide precise qualitative and quantitative information on these processes, accurate and highly precise isotope data need to be obtained. The currently applied ICP-MS methods for Se isotope measurements are compromised by the necessity to perform a large number of interference corrections. Differences in these correction methods can lead to discrepancies in published Se isotope values of rock standards that are significantly larger than the claimed precision. An independent analytical approach applying a double spike (DS) and state-of-the-art TIMS may yield better precision due to its smaller number of interferences and could test the accuracy of data obtained by ICP-MS approaches. This study shows that the precision of Se isotope measurements performed with two different Thermo Scientific™ Triton™ Plus TIMS instruments is distinctly deteriorated, to about ±1‰ (2 s.d.) in δ80/78Se, by a memory Se signal of up to several millivolts and by an additional minor residual mass bias that could not be corrected for with the common isotope fractionation laws. This memory Se has a variable isotope composition with a DS fraction of up to 20% and accumulates with an increasing number of measurements. Thus, it represents an accumulation of Se from previous Se measurements with a potential addition from a sample or machine blank. Several cleaning techniques for the MS parts were tried to decrease the memory signal, but they were not sufficient to allow precise Se isotope analysis. If these serious memory problems can be overcome in the future, the precision and accuracy of Se isotope analysis with TIMS should be significantly better than those of the current ICP-MS approaches.
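For reference, δ80/78Se values such as those quoted above express the relative deviation of a measured isotope ratio from a reference standard in per mil, so a ±1‰ scatter corresponds to a ±0.1% ratio variation. A minimal sketch of the standard conversion:

```python
def delta_permil(r_sample: float, r_standard: float) -> float:
    """Delta notation: per-mil deviation of a sample isotope ratio
    (e.g. 80Se/78Se) from the same ratio in a reference standard."""
    return (r_sample / r_standard - 1.0) * 1000.0
```

For example, a sample ratio 0.1% higher than the standard yields δ = +1‰, the magnitude of the memory-induced scatter reported in the study.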
Abstract:
Background Leg edema is a common manifestation of various underlying pathologies. Reliable measurement tools are required to quantify edema and monitor therapeutic interventions. The aim of the present work was to investigate the reproducibility of optoelectronic leg volumetry over a 3-week period and to eliminate daytime-related within-individual variability. Methods Optoelectronic leg volumetry was performed in 63 hairdressers (mean age 45 ± 16 years, 85.7% female) in standing position, twice within a minute for each leg, and repeated after 3 weeks. Both lower leg (legBD) and whole limb (limbBF) volumetry were analysed. Reproducibility was expressed as analytical and within-individual coefficients of variation (CVA, CVW) and as intra-class correlation coefficients (ICC). Results A total of 492 leg volume measurements were analysed. Both legBD and limbBF volumetry were highly reproducible, with CVA of 0.5% and 0.7%, respectively. Within-individual reproducibility of legBD and limbBF volumetry over the 3-week period was high (CVW 1.3% for both; ICC 0.99 for both). At both visits, the second measurement revealed a significantly higher volume than the first, with a mean increase of 7.3 ± 14.1 ml (0.33% ± 0.58%) for legBD and 30.1 ± 48.5 ml (0.52% ± 0.79%) for limbBF volume. A significant linear correlation was found between absolute and relative leg volume differences and the difference in exact time of day of measurement between the two study visits (P < .001). A time-correction formula derived from this relationship permitted further improvement of CVW. Conclusions Leg volume changes can be reliably assessed by optoelectronic leg volumetry at a single time point and over a 3-week period. However, volumetry results are biased by orthostatic and daytime-related volume changes. The bias from daytime-related volume changes can be minimized by a time-correction formula.
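The reproducibility statistics and the time correction above can be sketched as follows. The drift slope in the correction function is a made-up value, since the paper's fitted formula is not reproduced in the abstract.

```python
import statistics

def coefficient_of_variation(values):
    """CV in percent: sample standard deviation relative to the mean."""
    return statistics.stdev(values) / statistics.mean(values) * 100.0

def time_corrected_volume(volume_ml, dt_hours, slope_ml_per_hour):
    """Remove daytime-related drift between two visits.

    dt_hours is the difference in time of day between the measurements;
    slope_ml_per_hour is a hypothetical fitted drift rate, not the
    coefficient from the study.
    """
    return volume_ml - slope_ml_per_hour * dt_hours
```

With repeated measurements of the same leg, a CV of about 1% (as reported) means the standard deviation across visits is roughly 1% of the mean volume.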
Abstract:
For the development of meniscal substitutes and related finite element models it is necessary to know the mechanical properties of the meniscus and its attachments. Measurement errors can distort the determination of material properties. Therefore, the impact of metrological and geometrical measurement errors on the determination of the linear modulus of human meniscal attachments was investigated. After total differentiation, the errors of the force (+0.10%), attachment deformation (−0.16%), and fibre length (+0.11%) measurements almost annulled each other. The error of the cross-sectional area determination ranged from 0.00%, obtained from histological slides, up to 14.22%, obtained from digital calliper measurements. Hence, the total measurement error ranged from +0.05% to −14.17%, predominantly driven by the error in cross-sectional area determination. Further investigations revealed that the entire cross-section was significantly larger than the load-carrying collagen fibre area. This overestimation of the cross-sectional area led to an underestimation of the linear modulus of up to −36.7%. Additionally, the cross-sections of the collagen fibre area of the attachments varied significantly, by up to +90%, along their longitudinal axis. The resultant ratio between the collagen fibre area and the histologically determined cross-sectional area ranged between 0.61 for the posterolateral and 0.69 for the posteromedial ligament. The linear modulus of human meniscal attachments can thus be significantly underestimated depending on the method and location of cross-sectional area determination. Hence, it is suggested that the load-carrying collagen fibre area be assessed histologically or, alternatively, that the correction factors proposed in this study be used.
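The reported underestimation follows directly from the definition of the linear modulus: since E = F·L / (A·ΔL), overestimating the cross-sectional area A by a factor k scales E down by exactly 1/k. A minimal sketch of this relationship:

```python
def linear_modulus(force_n, fibre_length_mm, area_mm2, deformation_mm):
    """Linear modulus E = F * L / (A * dL); yields MPa for N and mm inputs."""
    return force_n * fibre_length_mm / (area_mm2 * deformation_mm)
```

For instance, using a cross-section 1.5 times larger than the true load-carrying fibre area reduces the computed modulus to 1/1.5 ≈ 67% of its true value; the input values below are illustrative, not data from the study.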
Abstract:
A new physics-based technique for correcting inhomogeneities present in sub-daily temperature records is proposed. The approach accounts for changes in the sensor-shield characteristics that affect the energy balance dependent on ambient weather conditions (radiation, wind). An empirical model is formulated that reflects the main atmospheric processes and can be used in the correction step of a homogenization procedure. The model accounts for short- and long-wave radiation fluxes (including a snow cover component for albedo calculation) of a measurement system, such as a radiation shield. One part of the flux is further modulated by ventilation. The model requires only cloud cover and wind speed for each day, but detailed site-specific information is necessary. The final model has three free parameters, one of which is a constant offset. The three parameters can be determined, e.g., using the mean offsets for three observation times. The model is developed using the example of the change from the Wild screen to the Stevenson screen in the temperature record of Basel, Switzerland, in 1966. It is evaluated based on parallel measurements of both systems during a sub-period at this location, which were discovered during the writing of this paper. The model can be used in the correction step of homogenization to distribute a known mean step-size to every single measurement, thus providing a reasonable alternative correction procedure for high-resolution historical climate series. It also constitutes an error model, which may be applied, e.g., in data assimilation approaches.
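The abstract does not give the model equation, but a hedged sketch of a three-parameter form consistent with the description (a constant offset plus a radiation-driven term damped by ventilation) might look like the following. The functional form, parameter names, and units are assumptions, not the published model.

```python
def screen_bias(a, b, c, sw_flux, lw_flux, wind_speed):
    """Hypothetical temperature-bias model for a radiation screen:
    constant offset plus a radiation term modulated by ventilation.

    a, b, c  -- the three free parameters (a is the constant offset)
    sw_flux, lw_flux -- short- and long-wave radiation fluxes at the site
    wind_speed -- daily wind speed; stronger ventilation damps the bias
    """
    return a + (b * sw_flux + c * lw_flux) / (1.0 + wind_speed)
```

A form like this matches the qualitative behaviour described above: the bias grows with radiative loading and shrinks with wind, and the known mean step-size can be distributed to individual measurements by evaluating the model with each day's cloud-cover-derived fluxes and wind speed.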
Abstract:
BACKGROUND: Physiological data obtained with the pulmonary artery catheter (PAC) are susceptible to errors in measurement and interpretation. Little attention has been paid to the relevance of errors in hemodynamic measurements performed in the intensive care unit (ICU). The aim of this study was to assess the errors related to the technical aspects (zeroing and reference level) and the actual measurement (curve interpretation) of the pulmonary artery occlusion pressure (PAOP). METHODS: Forty-seven participants in a special ICU training program and 22 ICU nurses were tested without pre-announcement. All participants had previously been exposed to the clinical use of the method. The first task was to set up a pressure measurement system for the PAC (zeroing and reference level) and the second to measure the PAOP. RESULTS: The median difference from the reference mid-axillary zero level was −3 cm (−8 to +9 cm) for physicians and −1 cm (−5 to +1 cm) for nurses. The median difference from the reference PAOP was 0 mmHg (−3 to 5 mmHg) for physicians and 1 mmHg (−1 to 15 mmHg) for nurses. When PAOP values were adjusted for the differences from the reference transducer level, the median differences from the reference PAOP values were 2 mmHg (−6 to 9 mmHg) for physicians and 2 mmHg (−6 to 16 mmHg) for nurses. CONCLUSIONS: Measurement of the PAOP is susceptible to substantial error as a result of practical mistakes. Comparison of results between ICUs or practitioners is therefore not possible.
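The level adjustment underlying the last comparison can be sketched as follows. The hydrostatic conversion (1 cmH2O ≈ 0.7355 mmHg) and the sign convention (a transducer below the reference level over-reads) are standard physics, but the exact correction procedure of the study is not given in the abstract.

```python
MMHG_PER_CMH2O = 0.7355  # 1 cmH2O of fluid column ≈ 0.7355 mmHg

def level_corrected_paop(measured_mmhg, transducer_below_reference_cm):
    """Correct a PAOP reading for a mis-levelled transducer: each cm the
    transducer sits below the mid-axillary reference adds ~0.74 mmHg of
    hydrostatic offset, which must be subtracted (negative values mean
    the transducer sat above the reference)."""
    return measured_mmhg - transducer_below_reference_cm * MMHG_PER_CMH2O
```

This is why the reported level errors matter clinically: a −8 to +9 cm spread in transducer placement alone translates into roughly a ±6 mmHg spread in apparent PAOP.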
Abstract:
BACKGROUND: Assessment of lung volume (FRC) and ventilation inhomogeneities with ultrasonic flowmeter and multiple breath washout (MBW) has been used to provide important information about lung disease in infants. Sub-optimal adjustment of the mainstream molar mass (MM) signal for temperature and external deadspace may lead to analysis errors in infants with critically small tidal volume changes during breathing. METHODS: We measured expiratory temperature in human infants at 5 weeks of age and examined the influence of temperature and deadspace changes on FRC results with computer simulation modeling. A new analysis method with optimized temperature and deadspace settings was then derived, tested for robustness to analysis errors and compared with the previously used analysis methods. RESULTS: Temperature in the facemask was higher and variations of deadspace volumes larger than previously assumed. Both showed considerable impact upon FRC and LCI results with high variability when obtained with the previously used analysis model. Using the measured temperature we optimized model parameters and tested a newly derived analysis method, which was found to be more robust to variations in deadspace. Comparison between both analysis methods showed systematic differences and a wide scatter. CONCLUSION: Corrected deadspace and more realistic temperature assumptions improved the stability of the analysis of MM measurements obtained by ultrasonic flowmeter in infants. This new analysis method using the only currently available commercial ultrasonic flowmeter in infants may help to improve stability of the analysis and further facilitate assessment of lung volume and ventilation inhomogeneities in infants.
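The FRC computation that these deadspace and temperature corrections feed into follows the standard multiple-breath washout dilution principle. A minimal sketch of that principle (not the device's proprietary algorithm; variable names are illustrative):

```python
def frc_from_washout(net_tracer_volume_ml, c_start, c_end):
    """FRC by gas dilution: the net volume of tracer gas washed out of
    the lung divided by the drop in its end-tidal concentration
    (concentrations as fractions, e.g. 0.05 for 5%)."""
    return net_tracer_volume_ml / (c_start - c_end)

def lung_clearance_index(cumulative_expired_volume_ml, frc_ml):
    """LCI: number of FRC turnovers needed to complete the washout;
    higher values indicate ventilation inhomogeneity."""
    return cumulative_expired_volume_ml / frc_ml
```

Because the tracer volume is inferred from the molar mass signal, any deadspace or temperature error propagates directly into both FRC and LCI, which is why the corrected settings improved the stability of the analysis.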
Abstract:
High-resolution and highly precise age models for recent lake sediments (last 100–150 years) are essential for quantitative paleoclimate research. These are particularly important for sedimentological and geochemical proxies, where transfer functions cannot be established and calibration must be based upon the relation of sedimentary records to instrumental data. High-precision dating for the calibration period is most critical as it determines directly the quality of the calibration statistics. Here, as an example, we compare radionuclide age models obtained on two high-elevation glacial lakes in the Central Chilean Andes (Laguna Negra: 33°38′S/70°08′W, 2,680 m a.s.l. and Laguna El Ocho: 34°02′S/70°19′W, 3,250 m a.s.l.). We show the different numerical models that produce accurate age-depth chronologies based on 210Pb profiles, and we explain how to obtain reduced age-error bars at the bottom part of the profiles, i.e., typically around the end of the 19th century. In order to constrain the age models, we propose a method with five steps: (i) sampling at irregularly-spaced intervals for 226Ra, 210Pb and 137Cs depending on the stratigraphy and microfacies, (ii) a systematic comparison of numerical models for the calculation of 210Pb-based age models: constant flux constant sedimentation (CFCS), constant initial concentration (CIC), constant rate of supply (CRS) and sediment isotope tomography (SIT), (iii) numerical constraining of the CRS and SIT models with the 137Cs chronomarker of AD 1964, and (iv) step-wise cross-validation with independent diagnostic environmental stratigraphic markers of known age (e.g., volcanic ash layers, historical floods and earthquakes). In both examples, we also use airborne pollutants such as spheroidal carbonaceous particles (reflecting the history of fossil fuel emissions), excess atmospheric Cu deposition (reflecting the production history of a large local Cu mine), and turbidites related to historical earthquakes.
Our results show that the SIT model constrained with the 137Cs AD 1964 peak performs best over the entire chronological profile (last 100–150 years) and yields the smallest standard deviations for the sediment ages. Such precision is critical for the calibration statistics and, ultimately, for the quality of the quantitative paleoclimate reconstruction. The systematic comparison of the CRS and SIT models also helps to validate the robustness of the chronologies in different sections of the profile. Although surprisingly poorly known and under-explored in paleolimnological research, the SIT model has great potential for paleoclimatological reconstructions based on lake sediments.
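Of the numerical models listed in step (ii), the CRS (constant rate of supply) model has a simple closed form: a layer's age follows from the cumulative unsupported 210Pb inventory below it. A minimal sketch, assuming inventories in consistent units:

```python
import math

PB210_HALF_LIFE_YR = 22.3
PB210_LAMBDA = math.log(2.0) / PB210_HALF_LIFE_YR  # 210Pb decay constant, 1/yr

def crs_age_yr(total_inventory, inventory_below_depth):
    """CRS model: t(z) = (1/lambda) * ln(A(0) / A(z)), where A(0) is the
    total unsupported 210Pb inventory of the core and A(z) the inventory
    below depth z."""
    return math.log(total_inventory / inventory_below_depth) / PB210_LAMBDA
```

A depth below which half the inventory remains dates to exactly one half-life (~22.3 yr); because A(z) becomes small and noisy at depth, age errors grow toward the bottom of the profile, which is why the constraining and cross-validation steps above matter most around the end of the 19th century.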
Abstract:
QUESTION UNDER STUDY To establish at what stage Swiss hospitals are in implementing an internal standard concerning communication with patients and families after an error that resulted in harm. METHODS Hospitals were identified via the Swiss Hospital Association's website. An anonymous questionnaire was sent during September and October 2011 to 379 hospitals in German, French or Italian. Hospitals were asked to specify their hospital type and the implementation status of an internal hospital standard that decrees that patients or their relatives are to be promptly informed about medical errors that result in harm. RESULTS Responses were received from a total of 205 hospitals, a response rate of 54%. Most responding hospitals (62%) had an error disclosure standard or planned to implement one within 12 months. The majority of responding university and acute care hospitals (75%) had introduced a disclosure standard or were planning to do so. In contrast, the majority of responding psychiatric, rehabilitation and specialty clinics (53%) had not introduced a standard. CONCLUSION Swiss hospitals appear to be at a promising stage in providing institutional support for practitioners disclosing medical errors to patients, which has been shown internationally to be an important factor in encouraging the disclosure of medical errors. However, many hospitals, in particular psychiatric, rehabilitation and specialty clinics, have not yet implemented an error disclosure policy. Further research is needed to explore the underlying reasons.
Abstract:
We derive multiscale statistics for deconvolution in order to detect qualitative features of the unknown density. An important example covered within this framework is testing for local monotonicity on all scales simultaneously. We investigate the moderately ill-posed setting, where the Fourier transform of the error density in the deconvolution model is of polynomial decay. For multiscale testing, we consider a calibration motivated by the modulus of continuity of Brownian motion. We investigate the performance of our results from both the theoretical and the simulation-based point of view. A major consequence of our work is that detecting qualitative features of a density in a deconvolution problem is a feasible task, although the minimax rates for pointwise estimation are very slow.
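The "moderately ill-posed" setting mentioned above is conventionally formalized by a polynomial-decay condition on the Fourier transform of the error density f_ε; a standard formulation (the exact constants used in the paper are not reproduced here) is:

```latex
\exists\, c, C > 0,\ \beta > 0:\qquad
c\,(1+|t|)^{-\beta} \;\le\; \bigl|\widehat{f_\varepsilon}(t)\bigr| \;\le\; C\,(1+|t|)^{-\beta}
\quad \text{for all } t \in \mathbb{R}.
```

This contrasts with the severely ill-posed case (exponential decay, e.g. Gaussian errors), where deconvolution rates degrade from polynomial to logarithmic; the polynomial decay is what makes multiscale feature detection feasible despite the slow pointwise estimation rates noted above.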