21 results for Point interpolation method
in BORIS: Bern Open Repository and Information System - Bern - Switzerland
Abstract:
The stable oxygen isotope composition of atmospheric precipitation (δ18Op) was analyzed for 39 stations distributed over Switzerland and its border zone. Monthly amount-weighted δ18Op values averaged over the 1995–2000 period showed the expected strong linear altitude dependence (−0.15 to −0.22‰ per 100 m) only during the summer season (May–September). Steeper gradients (~ −0.56 to −0.60‰ per 100 m) were observed for winter months over a low elevation belt, while hardly any altitudinal difference was seen for high elevation stations. This dichotomous pattern could be explained by the characteristically shallower vertical atmospheric mixing height during the winter season and provides empirical evidence for recently simulated effects of stratified atmospheric flow on orographic precipitation isotopic ratios. It also helps explain the "anomalous" deflected altitudinal water isotope profiles reported from many other high relief regions. Grids and isotope distribution maps of the monthly δ18Op have been calculated over the study region for 1995–1996. The adopted interpolation method took into account both the variable mixing heights and the seasonal difference in the isotopic lapse rate and combined them with residual kriging. The presented data set allows a point estimation of δ18Op with monthly resolution. According to test calculations executed on subsets, this biannual data set can be extended back to 1992 with maintained fidelity and, with a reduced station subset, even back to 1983 at the expense of reduced reliability of the derived δ18Op estimates, mainly in the eastern part of Switzerland. Before 1983, reliable results can only be expected for the Swiss Plateau, since important stations representing eastern and south-western Switzerland were not yet in operation.
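A minimal Python sketch of the general detrend-then-krige idea described above: station δ18Op values are detrended with an assumed altitude lapse rate, the residuals are interpolated spatially, and the trend is added back on the target grid. The lapse rate, the toy station data and the inverse-distance weighting used as a stand-in for the residual kriging step are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def estimate_d18op(stations_xy, stations_z, stations_d18o, grid_xy, grid_z,
                   lapse_rate=-0.0020):
    """Detrend station d18Op by an assumed altitude lapse rate (permil per m),
    interpolate the residuals spatially, and add the trend back on the grid.
    Inverse-distance weighting stands in here for the residual kriging step."""
    residuals = stations_d18o - lapse_rate * stations_z          # remove altitude trend
    d = np.linalg.norm(grid_xy[:, None, :] - stations_xy[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-6) ** 2                           # IDW weights
    res_grid = (w * residuals).sum(axis=1) / w.sum(axis=1)
    return res_grid + lapse_rate * grid_z                        # restore altitude trend

# toy usage with made-up station coordinates (km), elevations (m) and d18Op (permil)
xy = np.array([[0.0, 0.0], [10.0, 5.0], [20.0, 15.0]])
z = np.array([400.0, 1200.0, 2500.0])
d18o = np.array([-9.5, -11.0, -13.5])
grid_xy = np.array([[5.0, 2.0], [15.0, 10.0]])
grid_z = np.array([600.0, 1800.0])
print(estimate_d18op(xy, z, d18o, grid_xy, grid_z))
```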
Abstract:
BACKGROUND: Only a few standardized apraxia scales are available, and they do not cover all domains and semantic features of gesture production. The objective of the present study was therefore to evaluate the reliability and validity of a newly developed test of upper limb apraxia (TULIA), which is comprehensive yet short to administer. METHODS: The TULIA consists of 48 items covering the imitation and pantomime domains of non-symbolic (meaningless), intransitive (communicative) and transitive (tool-related) gestures, corresponding to 6 subtests. A 6-point scoring method (0-5) was used (score range 0-240). Performance was assessed by blinded raters based on videos in 133 stroke patients, 84 with left hemisphere damage (LHD) and 49 with right hemisphere damage (RHD), as well as 50 healthy subjects (HS). RESULTS: The clinimetric findings demonstrated mostly good to excellent internal consistency, inter-rater and intra-rater (test-retest) reliability, both at the level of the six subtests and at the individual item level. Criterion validity was evaluated by confirming hypotheses based on the literature. Construct validity was demonstrated by a high correlation (r = 0.82) with the De Renzi test. CONCLUSION: These results show that the TULIA is both a reliable and valid test to systematically assess gesture production. The test can be easily applied and is therefore useful for both research purposes and clinical practice.
Abstract:
Temperate C3 grasslands are of high agricultural and ecological importance in Central Europe. Plant growth, and consequently grassland yield, depends strongly on water supply during the growing season, which is projected to change in the future. We therefore investigated the effect of summer drought on the water uptake of an intensively managed lowland and an extensively managed sub-alpine grassland in Switzerland. Summer drought was simulated using transparent shelters. Standing above- and belowground biomass was sampled during three growing seasons. Soil and plant xylem waters were analyzed for oxygen (and hydrogen) stable isotope ratios, and the depths of plant water uptake were estimated by two different approaches: (1) a linear interpolation method and (2) a Bayesian calibrated mixing model. Relative to the control, aboveground biomass was reduced under drought conditions. In contrast to our expectations, lowland grassland plants subjected to summer drought were more likely (43–68 %) to rely on water in the topsoil (0–10 cm), whereas control plants relied less on the topsoil (4–37 %) and shifted to deeper soil layers (20–35 cm) during the drought period (29–48 %). Sub-alpine grassland plants did not differ significantly in uptake depth between drought and control plots during the drought period. Both approaches yielded similar results and showed that the drought treatment in the two grasslands did not induce a shift to deeper uptake depths, but rather maintained or shifted water uptake to even shallower soil depths. These findings illustrate the importance of shallow soil depths for plant performance under drought conditions.
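A minimal Python sketch of the linear interpolation approach (1): the uptake depth is taken as the soil depth whose pore-water δ18O matches the measured xylem δ18O, interpolated linearly between sampled depths. The soil profile values and the assumption of a monotonic profile are illustrative; the study's actual profiles and the Bayesian mixing model (2) are not reproduced here.

```python
import numpy as np

def uptake_depth_linear(soil_depths_cm, soil_d18o, xylem_d18o):
    """Linear interpolation approach: find the soil depth whose pore-water d18O
    matches the xylem d18O, interpolating linearly between sampled depths.
    Assumes the profile is monotonic over the range searched."""
    order = np.argsort(soil_d18o)                      # np.interp needs increasing x
    return float(np.interp(xylem_d18o, soil_d18o[order], soil_depths_cm[order]))

# illustrative values: enriched topsoil, depleted deeper layers
depths = np.array([5.0, 10.0, 20.0, 35.0])
soil = np.array([-4.0, -6.5, -9.0, -11.0])
print(uptake_depth_linear(depths, soil, xylem_d18o=-7.5))   # ~14 cm
```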
Abstract:
We obtain upper bounds for the total variation distance between the distributions of two Gibbs point processes in a very general setting. Applications are provided to various well-known processes and settings from spatial statistics and statistical physics, including the comparison of two Lennard-Jones processes, hard core approximation of an area interaction process and the approximation of lattice processes by a continuous Gibbs process. Our proof of the main results is based on Stein's method. We construct an explicit coupling between two spatial birth-death processes to obtain Stein factors, and employ the Georgii-Nguyen-Zessin equation for the total bound.
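For reference, the Georgii-Nguyen-Zessin (GNZ) equation invoked above has the following standard form (notation assumed here), where Ξ is the Gibbs point process, h is a non-negative measurable function and λ(u, ξ) denotes the Papangelou conditional intensity:

\[
\mathbb{E}\Big[\sum_{x \in \Xi} h\big(x,\, \Xi \setminus \{x\}\big)\Big]
\;=\; \mathbb{E}\Big[\int h(u, \Xi)\, \lambda(u, \Xi)\, \mathrm{d}u\Big].
\]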
Abstract:
It is well known that early initiation of specific anti-infective therapy is crucial to reduce mortality in severe infections. Culture-based pathogen detection is the diagnostic gold standard in such diseases. However, these methods yield results no earlier than 24 to 48 hours. Severe infections such as sepsis therefore need to be treated with an empirical antimicrobial therapy, which is ineffective in an unknown fraction of these patients. Today's microbiological point-of-care tests are pathogen-specific and therefore not appropriate for an infection with a variety of possible pathogens. Molecular nucleic acid diagnostics such as the polymerase chain reaction (PCR) allow the identification of pathogens and resistances. These methods are used routinely to speed up the analysis of positive blood cultures. The newest PCR-based system allows the identification of the 25 most frequent sepsis pathogens by parallel PCR, without previous culture, in less than 6 hours. These systems might thereby shorten the period of potentially insufficient anti-infective therapy. However, such extensive tools are not suitable as point-of-care diagnostics. Miniaturization and automation of nucleic acid-based methods are still pending, as is an increase in the number of pathogens and resistance genes detectable by these methods. It is assumed that molecular PCR techniques will have an increasing impact on microbiological diagnostics in the future.
Abstract:
By measuring the total crack lengths (TCL) along a gunshot wound channel simulated in ordnance gelatine, one can calculate the energy transferred by a projectile to the surrounding tissue along its course. Visual quantitative TCL analysis of cut slices in ordnance gelatine blocks is unreliable due to the poor visibility of cracks and the likely introduction of secondary cracks resulting from slicing. Furthermore, gelatine TCL patterns are difficult to preserve because of the deterioration of the internal structures of gelatine with age and the tendency of gelatine to decompose. By contrast, using computed tomography (CT) software for TCL analysis in gelatine, cracks on 1-cm thick slices can be easily detected, measured and preserved. In this experiment, CT TCL analyses were applied to gunshots fired into gelatine blocks with three different ammunition types (9-mm Luger full metal jacket, .44 Remington Magnum semi-jacketed hollow point and 7.62 × 51 RWS Cone-Point). The resulting TCL curves accurately reflected the three projectiles' capacity to transfer energy to the surrounding tissue and clearly showed the typical differences in energy transfer. We believe that CT is a useful tool for evaluating gunshot wound profiles with the TCL method and is indeed superior to conventional methods based on physical slicing of the gelatine.
Abstract:
This paper presents a kernel density correlation based non-rigid point set matching method and shows its application in statistical model based 2D/3D reconstruction of a scaled, patient-specific model from an uncalibrated X-ray radiograph. In this method, both the reference point set and the floating point set are first represented by kernel density estimates. A correlation measure between these two kernel density estimates is then optimized to find a displacement field that moves the floating point set onto the reference point set. Regularizations based on the overall deformation energy and the motion smoothness energy are used to constrain the displacement field for a robust point set matching. Incorporating this non-rigid point set matching method into a statistical model based 2D/3D reconstruction framework, we can reconstruct a scaled, patient-specific model from noisy edge points that are extracted directly from the X-ray radiograph by an edge detector. Our experiment conducted on datasets of two patients and six cadavers demonstrates a mean reconstruction error of 1.9 mm.
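A much-simplified Python sketch of the kernel density correlation idea: both point sets are represented by Gaussian kernels and their correlation (the sum of pairwise Gaussian affinities) is maximized. For brevity the sketch optimizes only a global translation; the paper's method instead optimizes a full displacement field with deformation-energy and smoothness regularization.

```python
import numpy as np
from scipy.optimize import minimize

def kernel_correlation(ref, flo, sigma):
    """Correlation of two Gaussian kernel density estimates placed on the
    reference and floating point sets (sum of pairwise Gaussian affinities)."""
    d2 = ((ref[:, None, :] - flo[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2)).sum()

def register_translation(ref, flo, sigma=0.5):
    """Toy registration: find the translation of the floating set that maximizes
    kernel correlation with the reference set (the paper optimizes a full,
    regularized non-rigid displacement field instead)."""
    cost = lambda t: -kernel_correlation(ref, flo + t, sigma)
    t0 = ref.mean(axis=0) - flo.mean(axis=0)          # centroid-offset initialization
    return minimize(cost, x0=t0, method="Nelder-Mead").x

ref = np.random.rand(50, 2)
flo = ref + np.array([0.3, -0.2]) + 0.01 * np.random.randn(50, 2)
print(register_translation(ref, flo))   # approximately [-0.3, 0.2]
```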
Abstract:
Iterative Closest Point (ICP) is a widely used method for point registration that is based on binary point-to-point assignments, whereas the Expectation Conditional Maximization (ECM) algorithm solves the point registration problem within a maximum-likelihood framework using point-to-cluster matching. In this paper, by implementing both algorithms and conducting experiments in a scenario where dozens of model points must be registered with thousands of observation points on a pelvis model, we investigated and compared the performance (e.g. accuracy and robustness) of ICP and ECM for point registration in cases without noise and with Gaussian white noise. The experimental results reveal that the ECM method is much less sensitive to initialization and achieves more consistent estimates of the transformation parameters than the ICP algorithm, since the latter easily gets trapped in local minima and yields quite different registration results for different initializations. Both algorithms can reach the same high level of registration accuracy; however, the ICP method usually requires an appropriate initialization to converge globally. In the presence of Gaussian white noise, the experiments show that ECM is less efficient but more robust than ICP.
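A minimal rigid ICP sketch in Python, illustrating the binary point-to-point assignments mentioned above: nearest neighbours are assigned with a k-d tree and the rotation/translation update is obtained in closed form via SVD (Kabsch). This is a generic textbook ICP, not the specific implementation evaluated in the paper; ECM would replace the hard assignments with posterior weights of a Gaussian mixture (soft point-to-cluster matching).

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(model, scene, iterations=50):
    """Minimal rigid ICP: alternate nearest-neighbour assignment (binary
    point-to-point matching) with a closed-form SVD (Kabsch) update of the
    rotation R and translation t that map the model onto the scene."""
    dim = model.shape[1]
    R, t = np.eye(dim), np.zeros(dim)
    tree = cKDTree(scene)
    for _ in range(iterations):
        moved = model @ R.T + t
        _, idx = tree.query(moved)                      # closest scene point per model point
        matched = scene[idx]
        mu_m, mu_s = moved.mean(axis=0), matched.mean(axis=0)
        H = (moved - mu_m).T @ (matched - mu_s)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0] * (dim - 1) + [np.sign(np.linalg.det(Vt.T @ U.T))])
        R_step = Vt.T @ D @ U.T                         # proper rotation (det = +1)
        t_step = mu_s - R_step @ mu_m
        R, t = R_step @ R, R_step @ t + t_step          # compose with current estimate
    return R, t
```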
Multicentre evaluation of a new point-of-care test for the determination of NT-proBNP in whole blood
Abstract:
BACKGROUND: The Roche CARDIAC proBNP point-of-care (POC) test is the first test intended for the quantitative determination of N-terminal pro-brain natriuretic peptide (NT-proBNP) in whole blood as an aid in the diagnosis of suspected congestive heart failure, in the monitoring of patients with compensated left-ventricular dysfunction and in the risk stratification of patients with acute coronary syndromes. METHODS: A multicentre evaluation was carried out to assess the analytical performance of the POC NT-proBNP test at seven different sites. RESULTS: The majority of the coefficients of variation (CVs) obtained for within-series imprecision using native blood samples were below 10%, both for 52 samples measured ten times and for 674 samples measured in duplicate. Using quality control material, the majority of CV values for day-to-day imprecision were below 14% for the low control level and below 13% for the high control level. In method comparisons of four lots of the POC NT-proBNP test with the laboratory reference method (Elecsys proBNP), the slope ranged from 0.93 to 1.10 and the intercept ranged from 1.8 to 6.9. The bias found between venous and arterial blood with the POC NT-proBNP method was ≤5%. All four lots of the POC NT-proBNP test investigated showed excellent agreement, with mean differences of between -5% and +4%. No significant interference was observed with lipaemic blood (triglyceride concentrations up to 6.3 mmol/L), icteric blood (bilirubin concentrations up to 582 micromol/L), haemolytic blood (haemoglobin concentrations up to 62 mg/L), biotin (up to 10 mg/L), rheumatoid factor (up to 42 IU/mL), or with 50 out of 52 standard or cardiological drugs in therapeutic concentrations. With bisoprolol and BNP, a somewhat higher bias was found in the low NT-proBNP concentration range (<175 ng/L). Haematocrit values between 28% and 58% had no influence on the test result. Interference may be caused by human anti-mouse antibodies (HAMA) types 1 and 2. No significant influence on the POC NT-proBNP results was found using sample volumes of 140-165 µL. High NT-proBNP concentrations above the measuring range of the POC NT-proBNP test did not lead to falsely low results due to a potential high-dose hook effect. CONCLUSIONS: The POC NT-proBNP test showed good analytical performance and excellent agreement with the laboratory method. The POC NT-proBNP assay is therefore suitable for use in the POC setting.
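As an aside, within-series imprecision from duplicate measurements is commonly computed with the duplicate-CV formula sketched below in Python; the paper does not state its exact computation, so this is an assumption for illustration only, with made-up values.

```python
import numpy as np

def duplicate_cv_percent(rep1, rep2):
    """Within-series imprecision from duplicate measurements: root-mean-square of
    the pairwise relative differences divided by sqrt(2), expressed in percent.
    (Standard duplicate-CV formula; not necessarily the paper's computation.)"""
    rep1, rep2 = np.asarray(rep1, float), np.asarray(rep2, float)
    means = (rep1 + rep2) / 2.0
    rel_diff = (rep1 - rep2) / means
    return 100.0 * np.sqrt(np.mean(rel_diff ** 2) / 2.0)

print(duplicate_cv_percent([102, 250, 1800], [95, 263, 1750]))   # a few percent
```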
Abstract:
Purpose: Development of an interpolation algorithm for re-sampling spatially distributed CT data with the following features: global and local integral conservation, avoidance of negative interpolation values for positively defined datasets, and the ability to control re-sampling artifacts. Method and Materials: The interpolation can be separated into two steps. First, the discrete CT data have to be continuously distributed by an analytic function that respects the boundary conditions. Generally, this function is determined by piecewise interpolation. Instead of using linear or high-order polynomial interpolations, which do not fulfill all of the above-mentioned features, a special form of Hermitian curve interpolation is used to solve the interpolation problem with respect to the required boundary conditions. A single parameter is determined, by which the behavior of the interpolation function is controlled. Second, the interpolated data have to be re-distributed with respect to the requested grid. Results: The new algorithm was compared with commonly used interpolation functions based on linear and second-order polynomials. It is demonstrated that these interpolation functions may over- or underestimate the source data by about 10%–20%, while the parameter of the new algorithm can be adjusted to significantly reduce these interpolation errors. Finally, the performance and accuracy of the algorithm were tested by re-gridding a series of X-ray CT images. Conclusion: Inaccurate sampling values may occur due to the lack of integral conservation. Re-sampling algorithms using high-order polynomial interpolation functions may produce significant artifacts in the re-sampled data. Such artifacts can be avoided by using the new algorithm based on Hermitian curve interpolation.
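A small Python illustration of why a shape-preserving Hermitian interpolant matters for positively defined CT data: a plain cubic interpolant over- and undershoots at a sharp step, whereas a monotone Hermite variant (SciPy's PCHIP, used here as a stand-in for the paper's algorithm) stays within the data range. The integral-conservation step and the tunable control parameter of the proposed algorithm are not reproduced.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator, interp1d

# Coarse, positively defined "CT profile" with a sharp step
x = np.arange(8.0)
y = np.array([0.0, 0.0, 0.0, 10.0, 10.0, 10.0, 0.0, 0.0])
x_fine = np.linspace(0.0, 7.0, 141)

cubic = interp1d(x, y, kind="cubic")(x_fine)     # plain cubic: over/undershoots the step
pchip = PchipInterpolator(x, y)(x_fine)          # monotone Hermite: stays within [0, 10]

print("cubic min/max:", cubic.min(), cubic.max())   # dips below 0 and rises above 10
print("pchip min/max:", pchip.min(), pchip.max())   # bounded by the data
```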
Abstract:
PURPOSE: To correlate the dimension of the visual field (VF) tested by Goldmann kinetic perimetry with the extent of visibility of the highly reflective layer between the inner and outer segments of the photoreceptors (IOS) seen in optical coherence tomography (OCT) images in patients with retinitis pigmentosa (RP). METHODS: In a retrospectively designed cross-sectional study, 18 eyes of 18 patients with RP were examined with OCT and Goldmann perimetry using test target I4e and compared with 18 eyes of 18 control subjects. A-scans of raw scan data of Stratus OCT images (Carl Zeiss Meditec AG, Oberkochen, Germany) were quantitatively analyzed for the presence of the signal generated by the highly reflective layer between the IOS in OCT images. Starting in the fovea, the distance to which this signal was detectable was measured. Visual fields were analyzed by measuring the distance from the center point to isopter I4e. OCT and visual field data were analyzed in a clockwise fashion every 30 degrees, and corresponding measures were correlated. RESULTS: In corresponding alignments, the distance from the center point to isopter I4e and the distance to which the highly reflective signal from the IOS can be detected correlate significantly (r = 0.75, P < 0.0001): the greater the distance in the VF, the greater the distance measured in OCT. CONCLUSIONS: The authors hypothesize that the retinal structure from which the highly reflective layer between the IOS emanates is of critical importance for visual and photoreceptor function. Further research is warranted to determine whether this may be useful as an objective marker of progression of retinal degeneration in patients with RP.
Abstract:
There is no accepted way of measuring prothrombin time without time loss in patients undergoing major surgery who are at risk of intraoperative dilutional and consumption coagulopathy due to bleeding and volume replacement with crystalloids or colloids. In these situations, decisions to administer fresh frozen plasma and procoagulatory drugs have to rely on clinical judgment. Point-of-care devices are considerably faster than standard laboratory methods. In this study we assessed the accuracy of a point-of-care (PoC) device measuring prothrombin time compared with the standard laboratory method. Patients undergoing major surgery and intensive care unit patients were included. PoC prothrombin time was measured with the CoaguChek XS Plus (Roche Diagnostics, Switzerland). PoC and reference tests were performed independently and interpreted under blinded conditions. Using a cut-off prothrombin time of 50%, we calculated diagnostic accuracy measures, plotted a receiver operating characteristic (ROC) curve and tested for equivalence between the two methods. PoC sensitivity and specificity were 95% (95% CI 77%, 100%) and 95% (95% CI 91%, 98%), respectively. The negative likelihood ratio was 0.05 (95% CI 0.01, 0.32). The positive likelihood ratio was 19.57 (95% CI 10.62, 36.06). The area under the ROC curve was 0.988. Equivalence between the two methods was confirmed. The CoaguChek XS Plus is a rapid and highly accurate test compared with the reference test. These findings suggest that PoC testing will be useful for monitoring intraoperative prothrombin time when coagulopathy is suspected. It could lead to a more rational use of expensive and limited blood bank resources.
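A short Python sketch of the diagnostic accuracy measures reported above (sensitivity, specificity, likelihood ratios, ROC AUC) at the 50% prothrombin time cut-off, using the laboratory method as the reference standard. The numbers below are made up for illustration and do not reproduce the study data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def diagnostic_accuracy(reference_low, poc_low):
    """Sensitivity, specificity and likelihood ratios of the PoC test for
    detecting prothrombin time below the 50% cut-off, taking the laboratory
    method as the reference standard."""
    ref = np.asarray(reference_low, bool)
    poc = np.asarray(poc_low, bool)
    tp, fn = np.sum(ref & poc), np.sum(ref & ~poc)
    tn, fp = np.sum(~ref & ~poc), np.sum(~ref & poc)
    sens, spec = tp / (tp + fn), tn / (tn + fp)
    return {"sensitivity": sens, "specificity": spec,
            "LR+": sens / (1 - spec), "LR-": (1 - sens) / spec}

# illustrative data only: reference classification and PoC prothrombin time in %
ref = np.array([1, 1, 1, 0, 0, 0, 0, 0, 0, 0], bool)
poc_pt_percent = np.array([38, 45, 52, 70, 85, 48, 65, 72, 60, 95], float)
print(diagnostic_accuracy(ref, poc_pt_percent < 50))
print("AUC:", roc_auc_score(ref, -poc_pt_percent))   # lower PT% indicates coagulopathy
```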
Abstract:
OBJECTIVE: In ictal scalp electroencephalography (EEG), the presence of artefacts and the wide-ranging patterns of discharges are hurdles to good diagnostic accuracy. Quantitative EEG aids the lateralization and/or localization of epileptiform activity. METHODS: Twelve patients achieving Engel Class I/IIa outcome following temporal lobe surgery (at 1 year) were selected, with approximately 1-3 ictal EEGs analyzed per patient. The EEG signals were denoised with the discrete wavelet transform (DWT), followed by computation of the normalized absolute slopes and spatial interpolation of the scalp topography combined with detection of local maxima. For localization, the region with the highest normalized absolute slopes at the time when epileptiform activities were registered (>2.5 times the standard deviation) was designated as the region of onset. For lateralization, the cerebral hemisphere registering the first appearance of normalized absolute slopes >2.5 times the standard deviation was designated as the side of onset. For comparison, all EEG episodes were reviewed by two neurologists blinded to clinical information to determine the localization and lateralization of seizure onset by visual analysis. RESULTS: 16/25 seizures (64%) were correctly localized by the visual method and 21/25 seizures (84%) by the quantitative EEG method. 12/25 seizures (48%) were correctly lateralized by the visual method and 23/25 seizures (92%) by the quantitative EEG method. The McNemar test gave p=0.15 for localization and p=0.0026 for lateralization when comparing the two methods. CONCLUSIONS: The quantitative EEG method yielded significantly more correctly lateralized seizure episodes, and there was a trend towards more correctly localized seizures. SIGNIFICANCE: Coupling DWT with the absolute slope method helps clinicians achieve better EEG diagnostic accuracy.
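A rough Python sketch of the per-channel processing described above, using PyWavelets: DWT denoising with soft-thresholded detail coefficients, followed by normalized absolute slopes and a 2.5-standard-deviation threshold. The wavelet family, decomposition level, threshold rule and normalization are assumptions for illustration; the paper's exact pipeline (including the spatial interpolation of the scalp topography) is not reproduced.

```python
import numpy as np
import pywt

def normalized_absolute_slopes(eeg, wavelet="db4", level=4):
    """Denoise one EEG channel with a discrete wavelet transform (soft-thresholded
    detail coefficients), then return its absolute first difference (slope)
    normalized by the slope signal's own standard deviation."""
    coeffs = pywt.wavedec(eeg, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745            # robust noise estimate
    thr = sigma * np.sqrt(2.0 * np.log(len(eeg)))             # universal threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    denoised = pywt.waverec(coeffs, wavelet)[: len(eeg)]
    slopes = np.abs(np.diff(denoised))
    return slopes / slopes.std()

# a channel would be flagged where its normalized slope exceeds 2.5 standard deviations
channel = np.random.randn(2048)
flagged = normalized_absolute_slopes(channel) > 2.5
print(flagged.sum(), "samples above threshold")
```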
Abstract:
PURPOSE: Positron emission tomography (PET)/computed tomography (CT) measurements on small lesions are impaired by the partial volume effect, which is intrinsically tied to the point spread function of the actual imaging system, including the reconstruction algorithms. The variability resulting from different point spread functions hinders the assessment of quantitative measurements in clinical routine and especially degrades comparability within multicenter trials. To improve quantitative comparability there is a need for methods to match different PET/CT systems through elimination of this systemic variability. Consequently, a new method was developed and tested that transforms the image of an object as produced by one tomograph into the image of the same object as it would have been seen by a different tomograph. The proposed method, termed Transconvolution, compensates for the differing imaging properties of different tomographs and particularly aims at quantitative comparability of PET/CT in the context of multicenter trials. METHODS: To solve the problem of image normalization, the theory of Transconvolution was established mathematically, together with new methods to handle the point spread functions of different PET/CT systems. Knowing the point spread functions of two different imaging systems allows a Transconvolution function to be determined that converts one image into the other. This function is calculated by convolving one point spread function with the inverse of the other point spread function, which, when adhering to certain boundary conditions such as the use of linear acquisition and image reconstruction methods, is a numerically accessible operation. For reliable measurement of the point spread functions characterizing different PET/CT systems, a dedicated solid-state phantom incorporating (68)Ge/(68)Ga filled spheres was developed. To iteratively determine and represent these point spread functions, exponential density functions in combination with a Gaussian distribution were introduced. Furthermore, simulation of a virtual PET system provided a standard imaging system with clearly defined properties to which the real PET systems were to be matched. A Hann window served as the modulation transfer function for the virtual PET; its apodization properties suppressed high spatial frequencies above a certain critical frequency, thereby fulfilling the above-mentioned boundary conditions. The determined point spread functions were subsequently used by the Transconvolution algorithm to match different PET/CT systems onto the virtual PET system. Finally, the theoretically elaborated Transconvolution method was validated by transforming phantom images acquired on two different PET systems into nearly identical data sets, as they would be imaged by the virtual PET system. RESULTS: The proposed Transconvolution method matched different PET/CT systems for an improved and reproducible determination of a normalized activity concentration. The largest difference in measured activity concentration between the two PET systems, 18.2%, was found in spheres of 2 ml volume; Transconvolution reduced this difference to 1.6%. In addition to reestablishing comparability, the new method, with its parameterization of point spread functions, allowed a full characterization of the imaging properties of the examined tomographs.
CONCLUSIONS: By matching different tomographs to a virtual standardized imaging system, Transconvolution provides a new, comprehensive method for cross-calibration in quantitative PET imaging. The use of a virtual PET system restores comparability between data sets from different PET systems by exerting a common, reproducible and defined partial volume effect.
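A heavily simplified 1D Python sketch of the Transconvolution idea: the measured system's transfer function is divided out in Fourier space (with a small regularizer as a stand-in for the boundary conditions discussed above) and the Hann-windowed target MTF of the virtual system is applied. The Gaussian system PSF, cut-off frequency and phantom profile are illustrative assumptions; the paper's PSF parameterization and 3D implementation are not reproduced.

```python
import numpy as np

def transconvolve_1d(image, psf_system, mtf_target, eps=1e-3):
    """Match a measured profile from one tomograph to a virtual target system:
    divide out the system's transfer function in Fourier space (with a small
    regularizer to limit noise amplification) and apply the target MTF."""
    H_sys = np.fft.rfft(np.fft.ifftshift(psf_system))
    transfer = mtf_target * np.conj(H_sys) / (np.abs(H_sys) ** 2 + eps)
    return np.fft.irfft(np.fft.rfft(image) * transfer, n=len(image))

n = 256
x = np.arange(n) - n // 2
psf = np.exp(-0.5 * (x / 3.0) ** 2); psf /= psf.sum()         # assumed Gaussian system PSF
freqs = np.fft.rfftfreq(n)
f_crit = 0.15                                                 # assumed cut-off frequency
mtf_hann = np.where(freqs < f_crit,
                    0.5 * (1 + np.cos(np.pi * freqs / f_crit)), 0.0)  # Hann target MTF

profile = np.zeros(n); profile[100:140] = 1.0                 # "hot" region in a phantom profile
blurred = np.fft.irfft(np.fft.rfft(profile) * np.fft.rfft(np.fft.ifftshift(psf)), n=n)
matched = transconvolve_1d(blurred, psf, mtf_hann)
print(profile[100:140].mean(), blurred[100:140].mean(), matched[100:140].mean())
```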