Abstract:
Firn and polar ice cores offer the only direct palaeoatmospheric archive. Analyses of past greenhouse gas concentrations and their isotopic compositions in air bubbles in the ice can help to constrain changes in global biogeochemical cycles in the past. For the analysis of the hydrogen isotopic composition of methane (δD(CH4) or δ2H(CH4)), 0.5 to 1.5 kg of ice was hitherto required. Here we present a method to improve precision and reduce the sample amount for δD(CH4) measurements in (ice core) air. Pre-concentrated methane is focused in front of a high-temperature oven (pre-pyrolysis trapping), and molecular hydrogen formed by pyrolysis is trapped afterwards (post-pyrolysis trapping), both on a carbon-PLOT capillary at −196 °C. Argon, oxygen, nitrogen, carbon monoxide, unpyrolysed methane and krypton are trapped together with H2 and must be separated using a second short, cooled chromatographic column to ensure accurate results. Pre- and post-pyrolysis trapping largely removes the isotopic fractionation induced during chromatographic separation and results in a narrow peak in the mass spectrometer. Air standards can be measured with a precision better than 1‰. For polar ice samples from glacial periods, we estimate a precision of 2.3‰ for 350 g of ice (roughly 30 mL of air at standard temperature and pressure, STP) with 350 ppb of methane. This corresponds to recent tropospheric air samples (about 1900 ppb CH4) of about 6 mL (STP) or about 500 pmol of pure CH4.
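The stated equivalence between the glacial ice-core and modern-air sample sizes can be checked with a quick amount-of-substance calculation. The sketch below is illustrative arithmetic only (assuming an ideal-gas molar volume of 22 414 mL/mol at STP), not part of the published method.

```python
# Back-of-the-envelope check of the CH4 amounts quoted in the abstract.
MOLAR_VOLUME_STP_ML = 22414.0  # mL of ideal gas per mol at STP (assumed)

def ch4_pmol(air_ml_stp: float, ch4_ppb: float) -> float:
    """Picomoles of CH4 in a given air volume (mL at STP) at a given mole fraction (ppb)."""
    mol_air = air_ml_stp / MOLAR_VOLUME_STP_ML
    return mol_air * ch4_ppb * 1e-9 * 1e12  # mol -> pmol

print(ch4_pmol(30.0, 350.0))   # glacial ice-core air: ~470 pmol CH4
print(ch4_pmol(6.0, 1900.0))   # recent tropospheric air: ~510 pmol CH4
```

Both cases land near the ~500 pmol of CH4 quoted in the abstract, consistent with the stated equivalence.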
Abstract:
A new online method to analyse water isotopes of speleothem fluid inclusions using a wavelength scanned cavity ring down spectroscopy (WS-CRDS) instrument is presented. This novel technique allows us to measure hydrogen and oxygen isotopes simultaneously for a released aliquot of water. To do so, we designed a new simple line that allows the online water extraction and isotope analysis of speleothem samples. The specificity of the method lies in the fact that fluid inclusion water is released onto a standard water background, which mainly improves the robustness of δD. To saturate the line, a peristaltic pump continuously injects standard water into the line, which is permanently heated to 140 °C and flushed with dry nitrogen gas. This permits instantaneous and complete vaporisation of the standard water, resulting in an artificial water background with well-known δD and δ18O values. The speleothem sample is placed in a copper tube attached to the line and, after system stabilisation, it is crushed using a simple hydraulic device to liberate the speleothem fluid inclusion water. The released water is carried by the nitrogen/standard water gas stream directly to a Picarro L1102-i for isotope determination. To test the accuracy and reproducibility of the line and to measure standard water during speleothem measurements, a syringe injection unit was added to the line. Peak evaluation is done as in gas chromatography to obtain the δD and δ18O isotopic compositions of the measured water aliquots. Precision is better than 1.5‰ for δD and 0.4‰ for δ18O over an extended range (−210 to 0‰ for δD and −27 to 0‰ for δ18O), depending primarily on the amount of water released from the speleothem fluid inclusions and secondarily on the isotopic composition of the sample. The results show that WS-CRDS technology is suitable for speleothem fluid inclusion measurements and gives results that are comparable to the isotope ratio mass spectrometry (IRMS) technique.
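Because the released inclusion water is measured on top of the continuously injected standard-water background, the sample composition has to be separated from that background. A simple two-component isotope mass balance illustrates the general principle; this is a hedged sketch with hypothetical numbers, not the authors' actual peak-evaluation procedure.

```python
# Two-component mixing: the analysed peak is background (standard) water plus
# inclusion water. Given the background's known delta value and the water
# amounts, the inclusion water's delta follows from isotope mass balance.

def sample_delta(delta_mix: float, m_total: float,
                 delta_bg: float, m_bg: float) -> float:
    """delta of the released inclusion water (permil), assuming linear mixing of
    delta values weighted by water amount (a good approximation over small ranges)."""
    m_sample = m_total - m_bg
    return (delta_mix * m_total - delta_bg * m_bg) / m_sample

# Hypothetical numbers: 1.2 uL of water analysed in total, of which 1.0 uL is
# background water at -10 permil; the mixture measures -25 permil.
print(sample_delta(-25.0, 1.2, -10.0, 1.0))  # ~ -100 permil for the inclusion water
```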
Abstract:
In this paper, we report on an optical tolerance analysis of the submillimeter atmospheric multi-beam limb sounder, STEAMR. Physical optics and ray-tracing methods were used to quantify and separate errors in beam pointing and distortion due to reflector misalignment and primary reflector surface deformations. Simulations were performed concurrently with the manufacturing of a multi-beam demonstrator of the relay optical system which shapes and images the beams to their corresponding receiver feed horns. Results from Monte Carlo simulations show that the inserts used for reflector mounting should be positioned with an overall accuracy better than 100 μm (~ 1/10 wavelength). Analyses of primary reflector surface deformations show that a deviation of magnitude 100 μm can be tolerable before deployment, whereas the corresponding variations should be less than 30 μm during operation. The most sensitive optical elements in terms of misalignments are found near the focal plane. This localized sensitivity is attributed to the off-axis nature of the beams at this location. Post-assembly mechanical measurements of the reflectors in the demonstrator show that alignment better than 50 μm could be obtained.
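The Monte Carlo misalignment study can be pictured with a minimal sketch: random insert-position errors are drawn within an assumed tolerance and mapped to a beam-pointing error through a linear sensitivity coefficient. The number of inserts, the distributions and the sensitivity coefficient below are placeholders, not STEAMR values.

```python
import numpy as np

rng = np.random.default_rng(0)

N_RUNS = 10_000
TOLERANCE_UM = 100.0              # assumed 3-sigma positioning tolerance per insert
N_INSERTS = 12                    # hypothetical number of mounting inserts per beam path
SENSITIVITY_ARCSEC_PER_UM = 0.5   # placeholder linear pointing sensitivity

# Draw normally distributed insert errors (sigma = tolerance / 3) and combine
# their pointing contributions as a root-sum-square per Monte Carlo run.
errors_um = rng.normal(0.0, TOLERANCE_UM / 3.0, size=(N_RUNS, N_INSERTS))
pointing_arcsec = np.sqrt(np.sum((errors_um * SENSITIVITY_ARCSEC_PER_UM) ** 2, axis=1))

print(np.percentile(pointing_arcsec, 95))  # 95th-percentile pointing error across runs
```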
Abstract:
Quantification of protein expression based on immunohistochemistry (IHC) is an important step in clinical diagnoses and translational tissue-based research. Manual scoring systems are used in order to evaluate protein expression based on staining intensities and distribution patterns. However, visual scoring remains an inherently subjective approach. The aim of our study was to explore whether digital image analysis proves to be an alternative or even superior tool to quantify expression of membrane-bound proteins. We analyzed five membrane-binding biomarkers (HER2, EGFR, pEGFR, β-catenin, and E-cadherin) and performed IHC on tumor tissue microarrays from 153 esophageal adenocarcinoma patients from a single-center study. The tissue cores were scored visually applying an established routine scoring system as well as by using digital image analysis obtaining a continuous spectrum of average staining intensity. Subsequently, we compared both assessments using survival analysis as an end point. There were no significant correlations with patient survival using visual scoring of β-catenin, E-cadherin, pEGFR, or HER2. In contrast, the digital image analysis approach showed significant associations with disease-free survival for β-catenin, E-cadherin, pEGFR, and HER2 (P = 0.0125, P = 0.0014, P = 0.0299, and P = 0.0096, respectively). For EGFR, the association with patient survival was stronger with digital image analysis than with visual scoring (visual: P = 0.0045; image analysis: P < 0.0001). The results of this study indicated that digital image analysis was superior to visual scoring. Digital image analysis is more sensitive and therefore better able to detect biological differences within the tissues with greater accuracy. This increased sensitivity improves the quality of quantification.
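The core comparison is whether a continuous image-analysis readout separates survival outcomes better than a categorical visual score. The minimal, self-contained sketch below shows one common way to run such a comparison, dichotomising the continuous readout and applying a log-rank test; the `lifelines` package, the synthetic data and the median split are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)

# Hypothetical cohort: continuous mean staining intensity per core,
# disease-free survival time (months) and event indicator.
n = 153
intensity = rng.uniform(0, 1, n)
time = rng.exponential(scale=30 + 40 * intensity, size=n)  # toy association
event = rng.integers(0, 2, n).astype(bool)

# Dichotomise the continuous readout at its median and compare survival curves.
high = intensity > np.median(intensity)
result = logrank_test(time[high], time[~high], event[high], event[~high])
print(result.p_value)
```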
Abstract:
BACKGROUND The aim of this study was to evaluate imaging-based response to a standardized neoadjuvant chemotherapy (NACT) regimen by dynamic contrast-enhanced magnetic resonance mammography (DCE-MRM), whereby MR images were analyzed by an automatic computer-assisted diagnosis (CAD) system in comparison to visual evaluation. MRI findings were correlated with histopathologic response to NACT and also with the occurrence of metastases in a follow-up analysis. PATIENTS AND METHODS Fifty-four patients with invasive ductal breast carcinomas received two identical MRI examinations (before and after NACT; 1.5 T, contrast medium gadoteric acid). Pre-therapeutic images were compared with post-therapeutic examinations by CAD and two blinded human observers, considering morphologic and dynamic MRI parameters as well as tumor size measurements. Imaging-assessed response to NACT was compared with histopathologically verified response. All clinical, histopathologic, and DCE-MRM parameters were correlated with the occurrence of distant metastases. RESULTS Initial and post-initial dynamic parameters changed significantly between pre- and post-therapeutic DCE-MRM. Visually evaluated DCE-MRM revealed a sensitivity of 85.7%, a specificity of 91.7%, and a diagnostic accuracy of 87.0% in evaluating the response to NACT compared to histopathology. CAD analysis led to more false-negative findings (37.0%) compared to visual evaluation (11.1%), resulting in a sensitivity of 52.4%, a specificity of 100.0%, and a diagnostic accuracy of 63.0%. The following dynamic MRI parameters showed significant associations with occurring metastases: the post-initial curve type before NACT (entire lesions, calculated by CAD) and the post-initial curve type of the most enhancing tumor parts after NACT (calculated by CAD and manually). CONCLUSIONS In the accurate evaluation of response to neoadjuvant treatment, CAD systems can provide useful additional information owing to their high specificity; however, they cannot replace visual imaging evaluation. Besides traditional prognostic factors, contrast medium-induced dynamic MRI parameters reveal significant associations with patient outcome, i.e. the occurrence of distant metastases.
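The reported sensitivity, specificity, and accuracy figures are mutually consistent with a cohort of 42 histopathologic responders and 12 non-responders; these counts are inferred from the percentages, not stated in the abstract. The short sketch below recomputes the metrics from such a confusion matrix.

```python
# Confusion-matrix metrics for response assessment against histopathology.
def metrics(tp: int, fn: int, tn: int, fp: int):
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    acc = (tp + tn) / (tp + fn + tn + fp)
    return sens, spec, acc

# Counts inferred from the reported percentages (42 responders, 12 non-responders).
print(metrics(tp=36, fn=6, tn=11, fp=1))   # visual: ~0.857, ~0.917, ~0.870
print(metrics(tp=22, fn=20, tn=12, fp=0))  # CAD:    ~0.524,  1.000, ~0.630
```

Note that 20 false negatives out of 54 patients also reproduces the reported 37.0% false-negative rate for CAD.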
Abstract:
Urban agriculture is a phenomenon that can be observed world-wide, particularly in cities of developing countries. It contributes significantly to food security and food safety and has sustained the livelihoods of urban and peri-urban low-income dwellers in developing countries for many years. Population increase due to rural-urban migration and natural growth, coupled with formal as well as informal urbanization, is competing with urban farming for available space and scarce water resources. A multitemporal, multisensor urban change analysis over a period of 25 years (1982-2007) was performed in order to measure and visualize the urban expansion along the Kizinga and Mzinga valleys in the south of Dar es Salaam. Airphotos and VHR satellite data were analyzed using a combination of anisotropic textural measures and spectral information. The study revealed that unplanned built-up area is expanding continuously while vegetation cover and agricultural land decline at a fast rate. The validation showed that the overall classification accuracy varied depending on the database. The extracted built-up areas were used for visual interpretation and mapping purposes and served as an information source for another research project. The maps visualize urban congestion and an expansion covering nearly 18% of the total analyzed area that took place in the Kizinga valley between 1982 and 2007. The same development can be observed in the less developed and more remote Mzinga valley between 1981 and 2002. Both areas underwent fast changes: land prices still tend to go up, and an influx of people from both rural and urban areas continuously increases density, with the consequence of increasing multiple land-use interests.
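Land-cover classifications of this kind are typically validated with a confusion matrix against reference samples. The sketch below computes overall accuracy and Cohen's kappa from such a matrix in plain NumPy; the class counts are made up for illustration and are not the study's validation data.

```python
import numpy as np

# Rows = reference classes, columns = mapped classes (hypothetical counts),
# e.g. built-up, vegetation, agriculture.
cm = np.array([[120,  10,   5],
               [  8, 140,  12],
               [  6,  15, 110]])

total = cm.sum()
overall_accuracy = np.trace(cm) / total

# Cohen's kappa: agreement beyond the chance level expected from the marginals.
expected = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / total**2
kappa = (overall_accuracy - expected) / (1 - expected)

print(overall_accuracy, kappa)
```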
Abstract:
An in-depth study, using simulations and covariance analysis, is performed to identify the optimal sequence of observations to obtain the most accurate orbit propagation. The accuracy of the results of an orbit determination/improvement process depends on: tracklet length, number of observations, type of orbit, astrometric error, time interval between tracklets and observation geometry. The latter depends on the position of the object along its orbit and the location of the observing station. This covariance analysis aims to optimize the observation strategy taking into account the influence of the orbit shape, of the relative object-observer geometry and the interval between observations.
Abstract:
The Astronomical Institute of the University of Bern (AIUB) is conducting several search campaigns for space debris using optical sensors. The debris objects are discovered during systematic survey observations. In general, the result of a discovery consists of only a short observation arc, or tracklet, which is used to perform a first orbit determination in order to be able to observe the object again in subsequent follow-up observations. The additional observations are used in the orbit improvement process to obtain accurate orbits to be included in a catalogue. In order to obtain the most accurate orbit within the time available it is necessary to optimize the follow-up observation strategy. In this paper, an in-depth study, using simulations and covariance analysis, is performed to identify the optimal sequence of follow-up observations to obtain the most accurate orbit propagation to be used for space debris catalogue maintenance. The main factors that determine the accuracy of the results of an orbit determination/improvement process are: tracklet length, number of observations, type of orbit, astrometric error of the measurements, time interval between tracklets, and the relative position of the object along its orbit with respect to the observing station. The main aim of the covariance analysis is to optimize the follow-up strategy as a function of the object-observer geometry, the interval between follow-up observations and the shape of the orbit. This analysis can be applied to every orbital regime, but particular attention was dedicated to geostationary, Molniya, and geostationary transfer orbits. Finally, the case with more than two follow-up observations and the influence of a second observing station are also analyzed.
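The covariance analysis described here and in the related abstracts hinges on mapping the estimated state covariance forward in time through the state transition matrix. A minimal linear-propagation sketch is given below; the free-drift state transition matrix and the post-fit covariance are generic placeholders, not the paper's dynamics model or results.

```python
import numpy as np

# Linear covariance propagation: P(t) = Phi(t, t0) * P0 * Phi(t, t0)^T.
# P0 is the 6x6 covariance (position [km], velocity [km/s]) from the orbit
# determination; Phi would normally come from the variational equations of
# the orbit propagator. A free-drift placeholder is used here instead.
dt = 600.0                      # seconds of propagation
Phi = np.eye(6)
Phi[:3, 3:] = dt * np.eye(3)    # position error grows with velocity error

P0 = np.diag([1e-2, 1e-2, 1e-2, 1e-8, 1e-8, 1e-8])  # assumed post-fit covariance
P = Phi @ P0 @ Phi.T

pos_sigma_km = np.sqrt(np.trace(P[:3, :3]))
print(pos_sigma_km)             # aggregate position uncertainty after propagation
```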
Abstract:
The Astronomical Institute of the University of Bern (AIUB) is conducting several search campaigns for orbital debris. The debris objects are discovered during systematic survey observations. In general, only a short observation arc, or tracklet, is available for most of these objects. From this discovery tracklet a first orbit determination is computed in order to be able to find the object again in subsequent follow-up observations. The additional observations are used in the orbit improvement process to obtain accurate orbits to be included in a catalogue. In this paper, the accuracy of the initial orbit determination is analyzed. This depends on a number of factors: tracklet length, number of observations, type of orbit, astrometric error, and observation geometry. The latter is characterized by both the position of the object along its orbit and the location of the observing station. Different positions involve different distances from the target object and a different observing angle with respect to its orbital plane and trajectory. The present analysis aims at optimizing the geometry of the discovery observations depending on the considered orbit.
Abstract:
Today, there is little knowledge on the attitude state of decommissioned intact objects in Earth orbit. Observational means have advanced in the past years, but are still limited with respect to an accurate estimate of motion vector orientations and magnitude. Especially for the preparation of Active Debris Removal (ADR) missions as planned by ESA's Clean Space initiative or contingency scenarios for ESA spacecraft like ENVISAT, such knowledge is needed. ESA's "Debris Attitude Motion Measurements and Modelling" project (ESA Contract No. 40000112447), led by the Astronomical Institute of the University of Bern (AIUB), addresses this problem. The goal of the project is to achieve a good understanding of the attitude evolution and of the considerable internal and external effects that occur. To characterize the attitude state of selected targets in LEO and GTO, multiple observation methods are combined. Optical observations are carried out by AIUB, Satellite Laser Ranging (SLR) is performed by the Space Research Institute of the Austrian Academy of Sciences (IWF), and radar measurements and signal level determination are provided by the Fraunhofer Institute for High Frequency Physics and Radar Techniques (FHR). The In-Orbit Tumbling Analysis tool (ιOTA) is a prototype software, currently in development by Hyperschall Technologie Göttingen GmbH (HTG) within the framework of the project. ιOTA will be a highly modular software tool to perform short- (days), medium- (months) and long-term (years) propagation of the orbit and attitude motion (six degrees of freedom) of spacecraft in Earth orbit. The simulation takes into account all relevant acting forces and torques, including aerodynamic drag, solar radiation pressure, gravitational influences of Earth, Sun and Moon, eddy current damping, impulse and momentum transfer from space debris or micrometeoroid impact, as well as the optional definition of particular spacecraft-specific influences like tank sloshing, reaction wheel behaviour, magnetic torquer activity and thruster firing. The purpose of ιOTA is to provide high-accuracy short-term simulations to support observers and potential ADR missions, as well as medium- and long-term simulations to study the significance of the particular internal and external influences on the attitude, especially damping factors and momentum transfer. The simulation will also enable the investigation of the altitude dependency of the particular external influences. ιOTA's post-processing modules will generate synthetic measurements for observers and for software validation. The validation of the software will be done by cross-calibration with observations and measurements acquired by the project partners.
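The rotational core of such a six-degree-of-freedom propagation is the integration of Euler's equations for the body rates under external torques. The sketch below integrates only this rotational part with a fixed-step scheme and a placeholder linear damping torque standing in for effects such as eddy-current damping; the inertia values and torque model are assumptions for illustration, not ιOTA internals.

```python
import numpy as np

# Euler's rotational equations of motion: I * dw/dt = tau - w x (I w).
I = np.diag([3000.0, 8000.0, 9000.0])   # assumed principal moments of inertia [kg m^2]
I_inv = np.linalg.inv(I)
w = np.array([0.05, 0.01, 0.002])        # initial body rates [rad/s]

K_DAMP = 5.0                             # placeholder eddy-current-like damping [N m s]
dt, t_end = 1.0, 86400.0                 # 1 s steps over one day

for _ in range(int(t_end / dt)):
    tau = -K_DAMP * w                    # stand-in external torque
    w_dot = I_inv @ (tau - np.cross(w, I @ w))
    w = w + dt * w_dot                   # simple forward-Euler step

print(np.linalg.norm(w))                 # residual tumble rate after one day
```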
Abstract:
BACKGROUND & AIMS It is not clear whether symptoms alone can be used to estimate the biologic activity of eosinophilic esophagitis (EoE). We aimed to evaluate whether symptoms can be used to identify patients with endoscopic and histologic features of remission. METHODS Between April 2011 and June 2014, we performed a prospective, observational study and recruited 269 consecutive adults with EoE (67% male; median age, 39 years) in Switzerland and the United States. Patients first completed the validated symptom-based EoE activity index patient-reported outcome instrument and then underwent esophagogastroduodenoscopy with esophageal biopsy collection. Endoscopic and histologic findings were evaluated with a validated grading system and standardized instrument, respectively. Clinical remission was defined as a symptom score <20 (range, 0-100); histologic remission was defined as a peak count of <20 eosinophils/mm² in a high-power field (corresponding to approximately <5 eosinophils/median high-power field); and endoscopic remission as absence of white exudates, moderate or severe rings, strictures, or combination of furrows and edema. We used receiver operating characteristic analysis to determine the best symptom score cutoff values for detection of remission. RESULTS Of the study subjects, 111 were in clinical remission (41.3%), 79 were in endoscopic remission (29.7%), and 75 were in histologic remission (27.9%). When the symptom score was used as a continuous variable, patients in endoscopic, histologic, and combined (endoscopic and histologic) remission were detected with area under the curve values of 0.67, 0.60, and 0.67, respectively. A symptom score of 20 identified patients in endoscopic remission with 65.1% accuracy and histologic remission with 62.1% accuracy; a symptom score of 15 identified patients with both types of remission with 67.7% accuracy. CONCLUSIONS In patients with EoE, endoscopic or histologic remission can be identified with only modest accuracy based on symptoms alone. At any given time, physicians cannot rely on lack of symptoms to make assumptions about lack of biologic disease activity in adults with EoE. ClinicalTrials.gov, Number: NCT00939263.
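The cutoff selection described here is a standard receiver operating characteristic exercise: sweep thresholds on the symptom score, compute sensitivity and specificity against the remission label, and pick the cutoff that maximises, for example, Youden's J. A generic sketch follows (scikit-learn, synthetic data standing in for the study cohort).

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(2)

# Synthetic stand-ins: remission label and symptom score (lower score = more likely remission).
remission = rng.integers(0, 2, 269).astype(bool)
score = np.where(remission, rng.normal(18, 8, 269), rng.normal(32, 10, 269)).clip(0, 100)

# Use the negated score so that larger values predict remission (the positive class).
fpr, tpr, thresholds = roc_curve(remission, -score)
print("AUC:", roc_auc_score(remission, -score))

j = tpr - fpr                   # Youden's J at each threshold
best = np.argmax(j)
print("best cutoff:", -thresholds[best], "sens:", tpr[best], "spec:", 1 - fpr[best])
```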
Abstract:
This paper examines how the geospatial accuracy of samples and sample size influence conclusions from geospatial analyses. It does so using the example of a study investigating the global phenomenon of large-scale land acquisitions and the socio-ecological characteristics of the areas they target. First, we analysed land deal datasets of varying geospatial accuracy and varying sizes and compared the results in terms of land cover, population density, and two indicators for agricultural potential: yield gap and availability of uncultivated land that is suitable for rainfed agriculture. We found that an increase in geospatial accuracy led to a substantial and greater change in conclusions about the land cover types targeted than an increase in sample size, suggesting that using a sample of higher geospatial accuracy does more to improve results than using a larger sample. The same finding emerged for population density, yield gap, and the availability of uncultivated land suitable for rainfed agriculture. Furthermore, the statistical median proved to be more consistent than the mean when comparing the descriptive statistics for datasets of different geospatial accuracy. Second, we analysed effects of geospatial accuracy on estimations regarding the potential for advancing agricultural development in target contexts. Our results show that the target contexts of the majority of land deals in our sample whose geolocation is known with a high level of accuracy contain smaller amounts of suitable, but uncultivated land than regional- and national-scale averages suggest. Consequently, the more target contexts vary within a country, the more detailed the spatial scale of analysis has to be in order to draw meaningful conclusions about the phenomena under investigation. We therefore advise against using national-scale statistics to approximate or characterize phenomena that have a local-scale impact, particularly if key indicators vary widely within a country.
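The observation that the median is more consistent than the mean across datasets of different geospatial accuracy is easy to illustrate: coarse geolocation effectively injects values sampled from the wrong neighbourhoods, and the mean is far more sensitive to such outliers. A toy sketch with invented numbers, not the study's land deal data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Population density sampled at accurately geolocated deal sites (hypothetical).
accurate = rng.lognormal(mean=3.0, sigma=0.5, size=200)

# Coarse geolocation: a fraction of points fall in the wrong (e.g. urban) cells.
coarse = accurate.copy()
idx = rng.choice(coarse.size, size=20, replace=False)
coarse[idx] = rng.lognormal(mean=7.0, sigma=0.5, size=20)

print("means:  ", accurate.mean(), coarse.mean())           # shifts strongly
print("medians:", np.median(accurate), np.median(coarse))   # nearly unchanged
```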
Abstract:
BACKGROUND The aim of this study was to evaluate the accuracy of linear measurements on three imaging modalities: lateral cephalograms from a cephalometric machine with a 3 m source-to-mid-sagittal-plane distance (SMD), lateral cephalograms from a machine with a 1.5 m SMD, and 3D models from cone-beam computed tomography (CBCT) data. METHODS Twenty-one dry human skulls were used. Lateral cephalograms were taken using two cephalometric devices: one with a 3 m SMD and one with a 1.5 m SMD. CBCT scans were taken with a 3D Accuitomo® 170, and 3D surface models were created in Maxilim® software. Thirteen linear measurements were completed twice by two observers with a 4 week interval. Direct physical measurements by a digital calliper were defined as the gold standard. Statistical analysis was performed. RESULTS Nasion-Point A was significantly different from the gold standard in all methods. More statistically significant differences were found for the measurements on the 3 m SMD cephalograms in comparison to the other methods. Intra- and inter-observer agreement based on 3D measurements was slightly better than that of the other methods. LIMITATIONS Dry human skulls without soft tissues were used. Therefore, the results have to be interpreted with caution, as they do not fully represent clinical conditions. CONCLUSIONS 3D measurements resulted in better observer agreement. The accuracy of the measurements based on CBCT and the 1.5 m SMD cephalograms was better than that of the 3 m SMD cephalograms. These findings demonstrate the accuracy and reliability of linear measurements based on 3D CBCT data compared with 2D techniques. Future studies should focus on the implementation of 3D cephalometry in clinical practice.
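Comparing each modality against the calliper gold standard is essentially a paired-difference analysis; one common presentation is the Bland-Altman bias and limits of agreement, sketched below with synthetic values (the abstract does not specify the exact statistical tests used, so this is only an illustrative analysis of this type of data).

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic paired data: calliper gold standard vs. one imaging modality (mm).
gold = rng.uniform(20, 80, size=21 * 13)            # 21 skulls x 13 distances
modality = gold + rng.normal(0.3, 0.6, gold.size)   # assumed small bias plus noise

diff = modality - gold
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)                       # 95% limits of agreement

print(f"bias = {bias:.2f} mm, limits of agreement = [{bias - loa:.2f}, {bias + loa:.2f}] mm")
```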
Abstract:
AIM Depending on intensity, exercise may induce a strong hormonal and metabolic response, including acid-base imbalances and changes in microcirculation, potentially interfering with the accuracy of continuous glucose monitoring (CGM). The present study aimed at comparing the accuracy of the Dexcom G4 Platinum (DG4P) CGM during continuous moderate and intermittent high-intensity exercise (IHE) in adults with type 1 diabetes (T1DM). METHODS Ten male individuals with well-controlled T1DM (HbA1c 7.0±0.6% [54±6 mmol/mol]) inserted the DG4P sensor 2 days prior to a 90 min cycling session (50% VO2peak) either with (IHE) or without (CONT) a 10 s all-out sprint every 10 min. Venous blood samples for reference glucose measurement were drawn every 10 min and euglycemia (target 7 mmol/l) was maintained using an oral glucose solution. Additionally, lactate and venous blood gas variables were determined. RESULTS Mean reference blood glucose was 7.6±0.2 mmol/l during IHE and 6.7±0.2 mmol/l during CONT (p<0.001). IHE resulted in significantly higher levels of lactate (7.3±0.5 mmol/l vs. 2.6±0.3 mmol/l, p<0.001), while pH values were significantly lower in the IHE group (7.27 vs. 7.38, p=0.001). Mean absolute relative difference (MARD) was 13.3±2.2% for IHE and 13.6±2.8% for CONT, suggesting comparable accuracy (p=0.90). Using Clarke Error Grid Analysis, 100% of CGM values during both IHE and CONT were in zones A and B (IHE: 77% and 23%; CONT: 78% and 22%). CONCLUSIONS The present study revealed good and comparable accuracy of the DG4P CGM system during intermittent high-intensity and continuous moderate-intensity exercise, despite marked differences in metabolic conditions. This corroborates the clinical robustness of CGM under differing exercise conditions. CLINICAL TRIAL REGISTRATION NUMBER ClinicalTrials.gov NCT02068638.
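MARD is the headline accuracy statistic here: the mean of the absolute CGM-minus-reference differences expressed relative to the reference values. A minimal computation is shown below; the paired glucose values are invented, not study data.

```python
import numpy as np

# Paired sensor and reference (venous) glucose values in mmol/l (invented).
cgm = np.array([6.8, 7.4, 8.1, 7.0, 6.2, 7.9])
ref = np.array([7.0, 7.6, 7.3, 6.6, 6.9, 7.5])

# Mean absolute relative difference, expressed in percent of the reference value.
mard = np.mean(np.abs(cgm - ref) / ref) * 100.0
print(f"MARD = {mard:.1f} %")
```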
Abstract:
Immunoassays are essential in the workup of patients with suspected heparin-induced thrombocytopenia. However, the diagnostic accuracy is uncertain with regard to different classes of assays, antibody specificities, thresholds, test variations, and manufacturers. We aimed to assess diagnostic accuracy measures of available immunoassays and to explore sources of heterogeneity. We performed comprehensive literature searches and applied strict inclusion criteria. Finally, 49 publications comprising 128 test evaluations in 15 199 patients were included in the analysis. Methodological quality according to the revised tool for quality assessment of diagnostic accuracy studies was moderate. Diagnostic accuracy measures were calculated with the unified model (comprising a bivariate random-effects model and a hierarchical summary receiver operating characteristic model). Important differences were observed between classes of immunoassays, types of antibody specificity, thresholds, application of a confirmation step, and manufacturers. The combination of high sensitivity (>95%) and high specificity (>90%) was found in only 5 tests: polyspecific enzyme-linked immunosorbent assay (ELISA) with intermediate threshold (Genetic Testing Institute, Asserachrom), particle gel immunoassay, lateral flow immunoassay, polyspecific chemiluminescent immunoassay (CLIA) with a high threshold, and immunoglobulin G (IgG)-specific CLIA with a low threshold. Borderline results (sensitivity, 99.6%; specificity, 89.9%) were observed for the IgG-specific Genetic Testing Institute-ELISA with low threshold. Diagnostic accuracy appears to be inadequate in tests with high thresholds (ELISA; IgG-specific CLIA), a combination of IgG specificity and intermediate thresholds (ELISA, CLIA), a high-dose heparin confirmation step (ELISA), and the particle immunofiltration assay. When making treatment decisions, clinicians should be aware of the diagnostic characteristics of the tests used; it is recommended that they estimate posttest probabilities using likelihood ratios together with pretest probabilities derived from clinical scoring tools.
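The recommended workflow, combining a pretest probability from a clinical score with the test's likelihood ratio, is simple Bayesian arithmetic on the odds scale. The sketch below uses the sensitivity and specificity quoted above for the borderline IgG-specific ELISA, together with an assumed intermediate pretest probability of 14%; the pretest value is an illustrative assumption, not a figure from this meta-analysis.

```python
# Post-test probability from pretest probability and a likelihood ratio.
def posttest_probability(pretest_p: float, sensitivity: float, specificity: float,
                         positive_result: bool = True) -> float:
    lr = (sensitivity / (1 - specificity)) if positive_result else ((1 - sensitivity) / specificity)
    pretest_odds = pretest_p / (1 - pretest_p)
    posttest_odds = pretest_odds * lr
    return posttest_odds / (1 + posttest_odds)

# Example: sensitivity 99.6%, specificity 89.9% and an assumed 14% pretest probability.
print(posttest_probability(0.14, 0.996, 0.899, positive_result=True))   # ~0.62 after a positive test
print(posttest_probability(0.14, 0.996, 0.899, positive_result=False))  # ~0.0007 after a negative test
```

The asymmetry of the two outputs illustrates why such assays are most useful for ruling the diagnosis out rather than confirming it.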