28 results for High-Order Accuracy
in BORIS: Bern Open Repository and Information System - Bern - Switzerland
Abstract:
Tumor budding is recognized by the World Health Organization as an additional prognostic factor in colorectal cancer but remains unreported in diagnostic work due to the absence of a standardized scoring method. This study aims to assess the most prognostic and reproducible scoring systems for tumor budding in colorectal cancer. Tumor budding on pancytokeratin-stained whole tissue sections from 105 well-characterized stage II patients was scored by 3 observers using 7 methods: Hase, Nakamura, Ueno, Wang (conventional and rapid method), densest high-power field, and 10 densest high-power fields. The predictive value for clinicopathologic features, the prognostic significance, and the interobserver variability of each scoring method were analyzed. Pancytokeratin staining allowed accurate evaluation of tumor buds. Interobserver agreement for 3 observers was excellent for densest high-power field (intraclass correlation coefficient, 0.83) and 10 densest high-power fields (intraclass correlation coefficient, 0.91). Agreement was moderate to substantial for the conventional Wang method (κ = 0.46-0.62) and moderate for the rapid method (κ = 0.46-0.58). For Nakamura, moderate agreement (κ = 0.41-0.52) was reached, whereas concordance was fair to moderate for Ueno (κ = 0.39-0.56) and Hase (κ = 0.29-0.51). The Hase, Ueno, densest high-power field, and 10 densest high-power field methods identified a significant association of tumor budding with tumor border configuration. In multivariate analysis, only tumor budding as evaluated in densest high-power field and 10 densest high-power fields had significant prognostic effects on patient survival (P < .01), with high prognostic accuracy over the full 10-year follow-up. Scoring tumor buds in 10 densest high-power fields is a promising method to identify stage II patients at high risk for recurrence in daily diagnostics; it is highly reproducible, accounts for heterogeneity, and has a strong predictive value for adverse outcome.
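As a side note on the agreement statistics, the following is a minimal sketch of how pairwise inter-observer agreement for a categorical budding score could be computed with Cohen's kappa; the observer names and score lists are purely illustrative placeholders, and the count-based methods (densest high-power fields) would instead call for an intraclass correlation coefficient.

```python
# Minimal sketch: pairwise inter-observer agreement for a categorical
# budding score (e.g. low/high). The three observers' calls below are
# illustrative dummy data, not values from the study.
from itertools import combinations
from sklearn.metrics import cohen_kappa_score

scores = {
    "observer_1": ["low", "high", "low", "low", "high"],
    "observer_2": ["low", "high", "low", "high", "high"],
    "observer_3": ["low", "low", "low", "low", "high"],
}

for a, b in combinations(scores, 2):
    kappa = cohen_kappa_score(scores[a], scores[b])
    print(f"kappa({a}, {b}) = {kappa:.2f}")
```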
Abstract:
The problem of re-sampling spatially distributed data organized into regular or irregular grids to finer or coarser resolution is a common task in data processing. This procedure is known as 'gridding' or 're-binning'. Depending on the quantity the data represents, the gridding-algorithm has to meet different requirements. For example, histogrammed physical quantities such as mass or energy have to be re-binned in order to conserve the overall integral. Moreover, if the quantity is positive definite, negative sampling values should be avoided. The gridding process requires a re-distribution of the original data set to a user-requested grid according to a distribution function. The distribution function can be determined on the basis of the given data by interpolation methods. In general, accurate interpolation with respect to multiple boundary conditions of heavily fluctuating data requires polynomial interpolation functions of second or even higher order. However, this may result in unrealistic deviations (overshoots or undershoots) of the interpolation function from the data. Accordingly, the re-sampled data may overestimate or underestimate the given data by a significant amount. The gridding-algorithm presented in this work was developed in order to overcome these problems. Instead of a straightforward interpolation of the given data using high-order polynomials, a parametrized Hermitian interpolation curve was used to approximate the integrated data set. A single parameter is determined by which the user can control the behavior of the interpolation function, i.e. the amount of overshoot and undershoot. Furthermore, it is shown how the algorithm can be extended to multidimensional grids. The algorithm was compared to commonly used gridding-algorithms using linear and cubic interpolation functions. It is shown that such interpolation functions may overestimate or underestimate the source data by about 10-20%, while the new algorithm can be tuned to significantly reduce these interpolation errors. The accuracy of the new algorithm was tested on a series of x-ray CT-images (head and neck, lung, pelvis). The new algorithm significantly improves the accuracy of the sampled images in terms of the mean square error and a quality index introduced by Wang and Bovik (2002 IEEE Signal Process. Lett. 9 81-4).
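To illustrate the core idea of interpolating the integrated data set rather than the data itself, here is a minimal sketch that re-bins a 1D histogram by differencing a monotone Hermite (PCHIP) interpolant of the cumulative integral. PCHIP stands in for the paper's parametrized Hermitian curve; it corresponds to the fully damped (no overshoot) limit rather than a user-tunable amount of over/undershoot.

```python
# Minimal sketch of integral-conserving re-binning: interpolate the
# cumulative integral of the source bins with a monotone Hermite (PCHIP)
# curve and difference it at the target bin edges.
import numpy as np
from scipy.interpolate import PchipInterpolator

def rebin_conservative(src_edges, src_values, dst_edges):
    """Re-bin histogrammed data while conserving the total integral."""
    widths = np.diff(src_edges)
    # cumulative integral at the source bin edges (starts at 0)
    cum = np.concatenate(([0.0], np.cumsum(src_values * widths)))
    F = PchipInterpolator(src_edges, cum)          # monotone => no negative bins
    dst_cum = F(np.clip(dst_edges, src_edges[0], src_edges[-1]))
    return np.diff(dst_cum) / np.diff(dst_edges)   # mean value per target bin

# Example: coarse positive data re-binned onto a finer grid
src_edges = np.linspace(0.0, 4.0, 5)
src_values = np.array([1.0, 3.0, 0.5, 2.0])
dst_edges = np.linspace(0.0, 4.0, 17)
fine = rebin_conservative(src_edges, src_values, dst_edges)
# total integral is conserved up to rounding:
assert np.isclose(np.sum(fine * np.diff(dst_edges)),
                  np.sum(src_values * np.diff(src_edges)))
```

Because the interpolant passes exactly through the cumulative values at the source edges, the integral over any union of source bins is reproduced, and monotonicity of the cumulative curve guarantees non-negative output for positive data.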
Abstract:
Purpose: Development of an interpolation algorithm for re-sampling spatially distributed CT-data with the following features: global and local integral conservation, avoidance of negative interpolation values for positively defined datasets, and the ability to control re-sampling artifacts. Method and Materials: The interpolation can be separated into two steps: first, the discrete CT-data has to be continuously distributed by an analytic function considering the boundary conditions. Generally, this function is determined by piecewise interpolation. Instead of using linear or high-order polynomial interpolations, which do not fulfill all the above-mentioned features, a special form of Hermitian curve interpolation is used to solve the interpolation problem with respect to the required boundary conditions. A single parameter is determined by which the behavior of the interpolation function is controlled. Second, the interpolated data have to be re-distributed with respect to the requested grid. Results: The new algorithm was compared with commonly used interpolation functions based on linear and second-order polynomials. It is demonstrated that these interpolation functions may over- or underestimate the source data by about 10%–20%, while the parameter of the new algorithm can be adjusted in order to significantly reduce these interpolation errors. Finally, the performance and accuracy of the algorithm were tested by re-gridding a series of X-ray CT-images. Conclusion: Inaccurate sampling values may occur due to the lack of integral conservation. Re-sampling algorithms using high-order polynomial interpolation functions may result in significant artifacts in the re-sampled data. Such artifacts can be avoided by using the new algorithm based on Hermitian curve interpolation.
Abstract:
This study deals with indoor positioning using GSM radio, which has the distinct advantage of wide coverage over other wireless technologies. In particular, we focus on passive localization systems that are able to achieve high localization accuracy without any prior knowledge of the indoor environment or the tracking device's radio settings. To overcome these challenges, we propose new localization algorithms based on the exploitation of the received signal strength (RSS). We explore the effects of non-line-of-sight communication links, opening and closing of doors, and human mobility on RSS measurements and localization accuracy. We have implemented the proposed algorithms on top of software-defined radio systems and carried out detailed empirical indoor experiments. The performance results show that the proposed solutions are accurate, with average localization errors between 2.4 and 3.2 meters.
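The paper's specific RSS algorithms are not reproduced here, but the following sketch illustrates the standard building blocks such an approach rests on: ranging with a log-distance path-loss model and a least-squares position fit over several anchors. The path-loss parameters p0_dbm and n, and the anchor layout, are assumed values for illustration.

```python
# Minimal sketch of RSS-based ranging and positioning (not the paper's
# specific algorithms): ranges from a log-distance path-loss model,
# position from nonlinear least squares over several anchors.
import numpy as np
from scipy.optimize import least_squares

def rss_to_range(rss_dbm, p0_dbm=-40.0, n=3.0):
    """Invert the log-distance model RSS = P0 - 10*n*log10(d)."""
    return 10 ** ((p0_dbm - rss_dbm) / (10.0 * n))

def localize(anchors, rss):
    ranges = rss_to_range(np.asarray(rss))
    def residuals(p):
        # difference between particle-to-anchor distances and RSS-derived ranges
        return np.linalg.norm(anchors - p, axis=1) - ranges
    return least_squares(residuals, x0=anchors.mean(axis=0)).x

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
estimate = localize(anchors, rss=[-62.0, -70.0, -68.0, -75.0])
print(estimate)
```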
Abstract:
This paper analyses local geographical contexts targeted by transnational large-scale land acquisitions (>200 ha per deal) in order to understand how emerging patterns of socio-ecological characteristics can be related to processes of large-scale foreign investment in land. Using a sample of 139 land deals georeferenced with high spatial accuracy, we first analyse their target contexts in terms of land cover, population density, accessibility, and indicators for agricultural potential. Three distinct patterns emerge from the analysis: densely populated and easily accessible croplands (35% of land deals); remote forestlands with lower population densities (34% of land deals); and moderately populated and moderately accessible shrub- or grasslands (26% of land deals). These patterns are consistent with processes described in the relevant case study literature, and they each involve distinct types of stakeholders and associated competition over land. We then repeat the often-cited analysis that postulates a link between land investments and target countries with abundant so-called “idle” or “marginal” lands as measured by yield gap and available suitable but uncultivated land; our methods differ from the earlier approach, however, in that we examine local context (10-km radius) rather than countries as a whole. The results show that earlier findings are disputable in terms of concepts, methods, and contents. Further, we reflect on methodologies for exploring linkages between socioecological patterns and land investment processes. Improving and enhancing large datasets of georeferenced land deals is an important next step; at the same time, careful choice of the spatial scale of analysis is crucial for ensuring compatibility between the spatial accuracy of land deal locations and the resolution of available geospatial data layers. Finally, we argue that new approaches and methods must be developed to empirically link socio-ecological patterns in target contexts to key determinants of land investment processes. This would help to improve the validity and the reach of our findings as an input for evidence-informed policy debates.
Abstract:
Iterative Closest Point (ICP) is a widely exploited method for point registration that is based on binary point-to-point assignments, whereas the Expectation Conditional Maximization (ECM) algorithm tries to solve the problem of point registration within the framework of maximum likelihood with point-to-cluster matching. In this paper, by implementing both algorithms and conducting experiments in a scenario where dozens of model points must be registered with thousands of observation points on a pelvis model, we investigated and compared the performance (e.g. accuracy and robustness) of both ICP and ECM for point registration in cases without noise and with Gaussian white noise. The experimental results reveal that the ECM method is much less sensitive to initialization and is able to achieve more consistent estimations of the transformation parameters than the ICP algorithm, since the latter easily sinks into local minima and leads to quite different registration results for different initializations. Both algorithms can reach a similarly high registration accuracy; however, the ICP method usually requires an appropriate initialization to converge globally. In the presence of Gaussian white noise, the experiments show that ECM is less efficient but more robust than ICP.
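For reference, a textbook point-to-point ICP iteration (not the authors' exact implementation) can be sketched as follows: nearest-neighbour matching against the observation cloud with a k-d tree, followed by a closed-form rigid update via SVD (Kabsch).

```python
# Minimal point-to-point ICP sketch: binary nearest-neighbour assignments
# plus a least-squares rigid update, iterated until the mean residual
# stops improving.
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    D = np.eye(H.shape[0])
    D[-1, -1] = np.sign(np.linalg.det(Vt.T @ U.T))   # avoid reflections
    R = Vt.T @ D @ U.T
    return R, mu_d - R @ mu_s

def icp(model, observed, iters=50, tol=1e-8):
    tree = cKDTree(observed)
    src = model.copy()
    prev_err = np.inf
    for _ in range(iters):
        dist, idx = tree.query(src)              # binary point-to-point matches
        R, t = best_rigid_transform(src, observed[idx])
        src = src @ R.T + t
        if abs(prev_err - dist.mean()) < tol:
            break
        prev_err = dist.mean()
    return src

# usage: aligned = icp(model_points, observed_points) with (N,3) and (M,3) arrays
```

The dependence on initialization discussed in the abstract shows up directly in the nearest-neighbour step: a poor starting pose produces wrong correspondences, and the update can then lock onto a local minimum.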
Abstract:
One of the most intriguing phenomena in glass-forming systems is the dynamic crossover (T_B), occurring well above the glass temperature (T_g). So far, it was estimated mainly from the linearized derivative analysis of the primary relaxation time τ(T) or viscosity η(T) experimental data, originally proposed by Stickel et al. [J. Chem. Phys. 104, 2043 (1996); J. Chem. Phys. 107, 1086 (1997)]. However, this formal procedure is based on the general validity of the Vogel-Fulcher-Tammann equation, which has been strongly questioned recently [T. Hecksher et al. Nature Phys. 4, 737 (2008); P. Lunkenheimer et al. Phys. Rev. E 81, 051504 (2010); J. C. Martinez-Garcia et al. J. Chem. Phys. 134, 024512 (2011)]. We present a qualitatively new way to identify the dynamic crossover based on the apparent enthalpy space (H_a' = d ln τ / d(1/T)) analysis via a new plot of ln H_a' vs. 1/T, supported by the Savitzky-Golay filtering procedure for getting an insight into the noise-distorted high-order derivatives. It is shown that, depending on the ratio between the "virtual" fragility in the high-temperature dynamic domain (m_high) and the "real" fragility at T_g (the low-temperature dynamic domain, m = m_low), glass formers can be split into two groups related to f < 1 and f > 1 (f = m_high/m_low). The link of this phenomenon to the ratio between the apparent enthalpy and activation energy, as well as to the behavior of the configurational entropy, is indicated.
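A minimal sketch of the derivative analysis described above, assuming ln τ is available on a uniform grid in inverse temperature: the apparent enthalpy H_a' = d ln τ / d(1/T) is obtained as a Savitzky-Golay derivative and returned as ln H_a' versus 1/T. The filter window, polynomial order, and the synthetic VFT-like data are illustrative choices, not values from the paper.

```python
# Minimal sketch of the apparent-enthalpy analysis: smoothed numerical
# derivative of ln(tau) with respect to 1/T via a Savitzky-Golay filter.
import numpy as np
from scipy.signal import savgol_filter

def apparent_enthalpy(inv_T, ln_tau, window=11, polyorder=3):
    """Return 1/T and ln(H_a') with H_a' = d ln(tau) / d(1/T)."""
    step = inv_T[1] - inv_T[0]                  # uniform spacing in 1/T assumed
    Ha = savgol_filter(ln_tau, window, polyorder, deriv=1, delta=step)
    return inv_T, np.log(Ha)

# Synthetic VFT-like data just to exercise the function
inv_T = np.linspace(1.0 / 400.0, 1.0 / 200.0, 201)   # uniform grid in 1/T
T = 1.0 / inv_T
ln_tau = -25.0 + 2000.0 / (T - 150.0)                # ln(tau) = A + B/(T - T0)
x, y = apparent_enthalpy(inv_T, ln_tau)
```

A change in slope of ln H_a' vs. 1/T is then read off as the crossover, without assuming the Vogel-Fulcher-Tammann form.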
Abstract:
This study presents a proxy-based, quantitative reconstruction of cold-season (mean October to May, TOct–May) air temperatures covering nearly the entire last millennium (AD 1060–2003, some hiatuses). The reconstruction was based on subfossil chrysophyte stomatocyst remains in the varved sediments of high-Alpine Lake Silvaplana, eastern Swiss Alps (46°27′N, 9°48′E, 1791 m a.s.l.). Previous studies have demonstrated the reliability of this proxy by comparison to meteorological data. Cold-season air temperatures could therefore be reconstructed quantitatively, at a high resolution (5-yr) and with high chronological accuracy. Spatial correlation analysis suggests that the reconstruction reflects cold-season climate variability over the high-Alpine region and substantial parts of central and western Europe. Cold-season temperatures were characterized by a relatively stable first part of the millennium until AD 1440 (2σ of 5-yr mean values = 0.7 °C) and highly variable TOct–May after that (AD 1440–1900, 2σ of 5-yr mean values = 1.3 °C). Recent decades (AD 1991–present) were unusually warm in the context of the last millennium (exceeding the 2σ range of the mean decadal TOct–May), but this warmth was not unprecedented. The coolest decades occurred from AD 1510–1520 and AD 1880–1890. The timing of extremely warm and cold decades is generally in good agreement with documentary data representing Switzerland and central European lowlands. The transition from relatively stable to highly variable TOct–May coincided with large changes in atmospheric circulation patterns in the North Atlantic region. Comparison of reconstructed cold-season temperatures to the North Atlantic Oscillation (NAO) index during the past 1000 years showed that the relatively stable and warm conditions at the study site until AD 1440 coincided with a persistent positive mode of the NAO. We propose that the transition to large TOct–May variability around AD 1440 was linked to the subsequent absence of this persistent zonal flow pattern, which would allow other climatic drivers to gain importance in the study area. From AD 1440–1900, the similarity of reconstructed TOct–May to reconstructed air pressure in the Siberian High suggests a relatively strong influence of continental anticyclonic systems on Alpine cold-season climate parameters during periods when westerly airflow was subdued. A more continental type of atmospheric circulation thus seems to be characteristic for the Little Ice Age in Europe. Comparison of TOct–May to summer temperature reconstructions from the same study site shows that, as expected, summer and cold-season temperature trends and variability differed completely throughout nearly the entire last 1000 years. Since AD 1980, however, summer and cold-season temperatures show a simultaneous, strong increase, which is unprecedented in the context of the last millennium. We suggest that the most likely explanation for this recent trend is anthropogenic greenhouse gas (GHG) forcing.
Abstract:
We measured the elemental composition on a sample of Allende meteorite with a miniature laser ablation mass spectrometer. This Laser Mass Spectrometer (LMS) has been designed and built at the University of Bern in the Department of Space Research and Planetary Sciences with the objective of using such an instrument on a space mission. Utilising the meteorite Allende as the test sample in this study, it is demonstrated that the instrument allows the in situ determination of the elemental composition and thus mineralogy and petrology of untreated rocky samples, particularly on planetary surfaces. In total, 138 measurements of elemental compositions have been carried out on an Allende sample. The mass spectrometric data are evaluated and correlated with an optical image. It is demonstrated that by illustrating the measured elements in the form of mineralogical maps, LMS can serve as an element imaging instrument with a very high spatial resolution of µm scale. The detailed analysis also includes a mineralogical evaluation and an investigation of the volatile element content of Allende. All findings are in good agreement with published data and underline the high sensitivity, accuracy and capability of LMS as a mass analyser for space exploration.
Abstract:
Measurements of spin correlation in top quark pair production are presented using data collected with the ATLAS detector at the LHC in proton-proton collisions at a center-of-mass energy of 7 TeV, corresponding to an integrated luminosity of 4.6 fb⁻¹. Events are selected in final states with two charged leptons and at least two jets and in final states with one charged lepton and at least four jets. Four different observables sensitive to different properties of the top quark pair production mechanism are used to extract the correlation between the top and antitop quark spins. Some of these observables are measured for the first time. The measurements are in good agreement with the Standard Model prediction at next-to-leading-order accuracy.
Abstract:
Chrysophyte cysts are recognized as powerful proxies of cold-season temperatures. In this paper we use the relationship between chrysophyte assemblages and the number of days below 4 °C (DB4°C) in the epilimnion of a lake in northern Poland to develop a transfer function and to reconstruct winter severity in Poland for the last millennium. DB4°C is a climate variable related to the length of the winter. Multivariate ordination techniques were used to study the distribution of chrysophytes from sediment traps of 37 low-land lakes distributed along a variety of environmental and climatic gradients in northern Poland. Of all the environmental variables measured, stepwise variable selection and individual redundancy analyses (RDA) identified DB4°C as the most important variable for chrysophytes, explaining a portion of variance independent of variables related to water chemistry (conductivity, chlorides, K, sulfates), which were also important. A quantitative transfer function was created to estimate DB4°C from sedimentary assemblages using partial least squares regression (PLS). The two-component model (PLS-2) had a coefficient of determination of R²cross = 0.58, with a root mean squared error of prediction (RMSEP, based on leave-one-out cross-validation) of 3.41 days. The resulting transfer function was applied to an annually-varved sediment core from Lake Żabińskie, providing a new sub-decadal quantitative reconstruction of DB4°C with high chronological accuracy for the period AD 1000–2010. During Medieval Times (AD 1180–1440) winters were generally shorter (warmer), except for a decade with very long and severe winters around AD 1260–1270 (following the AD 1258 volcanic eruption). The 16th and 17th centuries and the beginning of the 19th century experienced very long, severe winters. Comparison with other European cold-season reconstructions and atmospheric indices for this region indicates that a large part of the winter variability (reconstructed DB4°C) is due to the interplay between the oscillations of the zonal flow controlled by the North Atlantic Oscillation (NAO) and the influence of continental anticyclonic systems (Siberian High, East Atlantic/Western Russia pattern). Differences with other European records are attributed to geographic climatological differences between Poland and Western Europe (Low Countries, Alps). Striking correspondence between the combined volcanic and solar forcing and the DB4°C reconstruction prior to the 20th century suggests that winter climate in Poland responds mostly to natural forced variability (volcanic and solar) and that the influence of unforced variability is low.
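The transfer-function step can be sketched as follows, assuming the calibration data are available as a taxa-abundance matrix with observed DB4°C values per lake; the arrays below are dummy placeholders, and the two-component PLS with leave-one-out RMSEP mirrors the model described in the abstract.

```python
# Minimal sketch of a PLS transfer function with leave-one-out RMSEP.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(0)
X = rng.random((37, 25))          # relative abundances of 25 taxa (dummy data)
y = rng.uniform(60, 120, 37)      # observed DB4°C per lake (dummy data)

pls2 = PLSRegression(n_components=2)
y_loo = cross_val_predict(pls2, X, y, cv=LeaveOneOut()).ravel()
rmsep = np.sqrt(np.mean((y - y_loo) ** 2))          # prediction error in days
r2_cross = np.corrcoef(y, y_loo)[0, 1] ** 2         # cross-validated R^2
print(f"RMSEP = {rmsep:.2f} days, R2_cross = {r2_cross:.2f}")
```

Once calibrated on the full training set, the fitted model would simply be applied to the downcore assemblage counts to yield the DB4°C reconstruction.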
Abstract:
Indoor positioning has become an emerging research area because of huge commercial demands for location-based services in indoor environments. Channel State Information (CSI), a form of fine-grained physical-layer information, has recently been proposed as a means to achieve high positioning accuracy using range-based methods, e.g., trilateration. In this work, we propose to fuse the CSI-based ranges and the velocity estimated from inertial sensors by an enhanced particle filter to achieve highly accurate tracking. The algorithm relies on enhanced ranging methods and further mitigates the remaining ranging errors by a weighting technique. Additionally, we provide an efficient method to estimate the velocity based on inertial sensors. The algorithms are designed in a network-based system, which uses rather cheap commercial devices as anchor nodes. We evaluate our system in a complex environment along three different moving paths. Our proposed tracking method achieves 1.3 m mean accuracy and 2.2 m accuracy at the 90th percentile, which is more accurate and stable than pedestrian dead reckoning and range-based positioning.
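A minimal particle-filter sketch (not the paper's enhanced variant with its ranging-error weighting) showing the basic fusion step: particles are propagated with the inertial velocity estimate, weighted by a Gaussian likelihood of the CSI-based ranges to known anchors, and resampled. The anchor layout, noise levels, and measurement values are assumptions for illustration.

```python
# Minimal particle filter fusing a velocity estimate with range measurements.
import numpy as np

rng = np.random.default_rng(1)
anchors = np.array([[0.0, 0.0], [8.0, 0.0], [0.0, 8.0], [8.0, 8.0]])
N = 1000
particles = rng.uniform(0.0, 8.0, size=(N, 2))        # initial position guesses

def pf_step(particles, velocity, dt, ranges, sigma_v=0.3, sigma_r=1.0):
    # predict: move particles with the inertial velocity estimate plus noise
    particles = particles + velocity * dt + rng.normal(0.0, sigma_v, particles.shape)
    # update: weight by how well particle-to-anchor distances match the ranges
    d = np.linalg.norm(particles[:, None, :] - anchors[None, :, :], axis=2)
    w = np.exp(-0.5 * np.sum(((d - ranges) / sigma_r) ** 2, axis=1))
    w /= w.sum()
    # resample (multinomial) and return the mean position as the estimate
    particles = particles[rng.choice(len(particles), len(particles), p=w)]
    return particles, particles.mean(axis=0)

particles, estimate = pf_step(particles, velocity=np.array([0.5, 0.0]),
                              dt=0.1, ranges=np.array([5.0, 4.5, 6.0, 6.5]))
```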
Abstract:
Any image-processing object detection algorithm somehow tries to integrate the object light (Recognition Step) and applies statistical criteria to distinguish objects of interest from other objects or from pure background (Decision Step). There are various possibilities for how these two basic steps can be realized, as can be seen from the different detection methods proposed in the literature. An ideal detection algorithm should provide high recognition sensitivity with high decision accuracy and require a reasonable computational effort. In reality, a gain in sensitivity is usually only possible at the cost of decision accuracy and higher computational effort. Automatic detection of faint streaks therefore remains a challenge. This paper presents a detection algorithm using spatial filters that simulate the geometrical form of possible streaks on a CCD image. This is realized by image convolution. The goal of this method is to generate a more or less perfect match between a streak and a filter by varying the length and orientation of the filters. The convolution responses are accepted or rejected according to an overall threshold given by the background statistics. As a first result, this approach yields a huge number of accepted responses due to filters partially covering streaks or remaining stars. To avoid this, a set of additional acceptance criteria has been included in the detection method. All criteria parameters are justified by background and streak statistics, and they affect the detection sensitivity only marginally. Tests on images containing simulated streaks and on real images containing satellite streaks show very promising sensitivity, reliability, and running speed for this detection method. Since all method parameters are based on statistics, the true-alarm as well as the false-alarm probability are well controllable. Moreover, the proposed method does not pose any extraordinary demands on the computer hardware or on the image acquisition process.
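The matched-filter idea can be sketched as follows: convolve the image with a bank of oriented line kernels and keep responses that exceed a threshold derived from the background statistics. Kernel length, the set of orientations, the background estimate, and the threshold factor are illustrative choices rather than the paper's values, and the additional acceptance criteria from the abstract are omitted.

```python
# Minimal sketch of streak detection via oriented matched filters.
import numpy as np
from scipy.ndimage import convolve, rotate

def line_kernel(length=15, angle_deg=0.0):
    k = np.zeros((length, length))
    k[length // 2, :] = 1.0                       # horizontal line segment
    k = rotate(k, angle_deg, reshape=False, order=1)
    return k / k.sum()                            # normalized to unit sum

def detect_streaks(image, angles=range(0, 180, 15), k_sigma=5.0):
    bg_mean, bg_std = np.median(image), image.std()   # rough background estimate
    detections = np.zeros_like(image, dtype=bool)
    for a in angles:
        k = line_kernel(angle_deg=a)
        response = convolve(image, k, mode="nearest")
        # for white background noise, the filtered noise std scales with sqrt(sum(k^2))
        threshold = bg_mean + k_sigma * bg_std * np.sqrt((k ** 2).sum())
        detections |= response > threshold
    return detections
```

Averaging along the filter suppresses pixel noise while leaving a well-aligned streak almost unchanged, which is why faint streaks become detectable against a statistically controlled threshold.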
Abstract:
This article centers on the computational performance of the continuous and discontinuous Galerkin time stepping schemes for general first-order initial value problems in ℝⁿ with continuous nonlinearities. We briefly review a recent existence result for discrete solutions from [6], and provide a numerical comparison of the two time discretization methods.
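As a small illustration of the lowest-order discontinuous Galerkin scheme, dG(0), for a scalar initial value problem u' = f(t, u): on each interval the piecewise-constant value U_n satisfies U_n = U_{n-1} + ∫ f(t, U_n) dt, which the sketch below approximates with midpoint quadrature and a plain fixed-point solve. This is a simplified toy under those stated assumptions, not the paper's implementation or its existence analysis.

```python
# Minimal dG(0) time stepping sketch for a scalar IVP u' = f(t, u), u(0) = u0.
import numpy as np

def dg0_solve(f, u0, t_end, num_steps, fp_iters=50, tol=1e-12):
    t = np.linspace(0.0, t_end, num_steps + 1)
    u = np.empty(num_steps + 1)
    u[0] = u0
    for n in range(num_steps):
        k = t[n + 1] - t[n]
        tm = 0.5 * (t[n] + t[n + 1])          # midpoint quadrature node
        U = u[n]                              # initial guess for the implicit value
        for _ in range(fp_iters):             # fixed-point iteration (needs small k)
            U_new = u[n] + k * f(tm, U)
            if abs(U_new - U) < tol:
                break
            U = U_new
        u[n + 1] = U
    return t, u

# Example: u' = -u + sin(t); the fixed-point map contracts for this step size
t, u = dg0_solve(lambda t, u: -u + np.sin(t), u0=1.0, t_end=5.0, num_steps=100)
```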