917 results for Statistical Robustness
Abstract:
Magdeburg, Univ., Fak. für Verfahrens- und Systemtechnik, Diss., 2014
Abstract:
Magdeburg, Univ., Fak. für Maschinenbau, Diss., 2015
Abstract:
v.39:no.3(1978)
Abstract:
This paper examines nematocysts as a source of biometric information for comparing the species Hydra vulgaris Pallas, 1766, Hydra vulgaris pedunculata Deserti et al., 2011 and Hydra pseudoligactis (Hyman, 1931). This biometric tool allows us to carry out statistical comparisons and to add the results to the identification of specimens from different classificatory groups. In this particular study, we found significant differences between species, between individuals of each species, and between nematocyst types when their nematocyst biometry was compared. A further result was the variation of particular nematocyst types, such as atrichous isorhiza and holotrichous isorhiza, in the species H. vulgaris in relation to column size.
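The kind of between-group statistical comparison described above can be sketched with a one-way ANOVA on capsule-length measurements. This is a minimal illustration, not the paper's actual analysis; all values, group sizes, and the choice of measurement are invented:

```python
def one_way_anova_f(groups):
    """F statistic of a one-way ANOVA over a list of samples."""
    n_total = sum(len(g) for g in groups)
    k = len(groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    # Between-group sum of squares: spread of group means around the grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: spread of observations around their group mean
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n_total - k)
    return ms_between / ms_within

# Hypothetical capsule lengths (um) of one nematocyst type in three taxa
h_vulgaris       = [8.1, 8.4, 7.9, 8.6, 8.2, 8.0]
h_pedunculata    = [9.0, 9.3, 8.8, 9.1, 9.4, 8.9]
h_pseudoligactis = [7.2, 7.0, 7.5, 7.1, 7.4, 7.3]

f_stat = one_way_anova_f([h_vulgaris, h_pedunculata, h_pseudoligactis])
# The 5% critical value of F(2, 15) is about 3.68
print(f_stat > 3.68)  # groups differ significantly in capsule length
```

The same comparison could be run per nematocyst type and per species pair, which matches the multi-level differences (species, individuals, nematocyst types) reported in the abstract.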
Abstract:
BACKGROUND: Cone-beam computed tomography (CBCT) image-guided radiotherapy (IGRT) systems are widely used tools to verify and correct the target position before each fraction, making it possible to maximize treatment accuracy and precision. In this study, we evaluate automatic three-dimensional intensity-based rigid registration (RR) methods for prostate setup correction using CBCT scans and study the impact of rectal distension on registration quality. METHODS: We retrospectively analyzed 115 CBCT scans of 10 prostate patients. CT-to-CBCT registration was performed using (a) global RR, (b) bony RR, or (c) bony RR refined by a local prostate RR using the CT clinical target volume (CTV) expanded with margins varying from 1 to 20 mm. After propagation of the manual CT contours, automatic CBCT contours were generated. For evaluation, a radiation oncologist manually delineated the CTV on the CBCT scans. The propagated and manual CBCT contours were compared using the Dice similarity coefficient and a measure based on the bidirectional local distance (BLD). We also conducted a blind visual assessment of the quality of the propagated segmentations. Moreover, we automatically quantified rectal distension between the CT and CBCT scans without using the manual CBCT contours, and we investigated its correlation with registration failures. To improve registration quality, the air in the rectum was replaced with soft tissue using a filter, and the results with and without filtering were compared. RESULTS: The statistical analysis of the Dice coefficients and the BLD values showed highly significant differences (p < 10^-6) for the 5-mm and 8-mm local RRs versus the global, bony, and 1-mm local RRs. The 8-mm local RR provided the best compromise between accuracy and robustness (median Dice of 0.814 and a 97% success rate with air filtering). We observed that all failures were due to high rectal distension. Moreover, the visual assessment confirmed the superiority of the 8-mm local RR over the bony RR. CONCLUSION: The most successful CT-to-CBCT RR method proved to be the 8-mm local RR. We have shown the correlation between its registration failures and rectal distension. Furthermore, we have provided a simple (easily applicable in routine) and automatic method to quantify rectal distension and to predict registration failure using only the manual CT contours.
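The contour comparison above relies on the Dice similarity coefficient between two segmentations. A minimal sketch on toy 2D binary masks (the grid size, region sizes, and one-voxel shift are invented for illustration):

```python
import numpy as np

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary segmentation masks."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    if total == 0:
        return 1.0  # two empty contours are trivially identical
    return 2.0 * intersection / total

# Toy 2D example: two overlapping "contours" on a small grid
manual = np.zeros((10, 10), dtype=bool)
manual[2:7, 2:7] = True          # manually delineated CTV (5x5 region)
propagated = np.zeros((10, 10), dtype=bool)
propagated[3:8, 3:8] = True      # propagated contour, shifted by one voxel

print(round(dice_coefficient(manual, propagated), 3))  # prints 0.64
```

A Dice of 1.0 means perfect overlap and 0.0 means none; the study's median of 0.814 for the 8-mm local RR sits well toward the high-overlap end.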
Abstract:
Robust decision making implies welfare costs or robustness premia when the approximating model is the true data generating process. To examine the importance of these premia at the aggregate level we employ a simple two-sector dynamic general equilibrium model with human capital and introduce an additional form of precautionary behavior. The latter arises from the robust decision maker's ability to reduce the effects of model misspecification through allocating time and existing human capital to this end. We find that the extent of the robustness premia critically depends on the productivity of time relative to that of human capital. When the relative efficiency of time is low, despite transitory welfare costs, there are gains from following robust policies in the long run. In contrast, high relative productivity of time implies misallocation costs that remain even in the long run. Finally, depending on the technology used to reduce model uncertainty, we find that while increasing the fear of model misspecification leads to a net increase in precautionary behavior, investment and output can fall.
Abstract:
In this paper we propose a novel empirical extension of the standard market microstructure order flow model. The main idea is that heterogeneity of beliefs in the foreign exchange market can cause model instability, and such instability has not been fully accounted for in the existing empirical literature. We investigate this issue using two different data sets and focusing on out-of-sample forecasts. Forecasting power is measured using standard statistical tests and, additionally, using an alternative approach based on measuring the economic value of forecasts after building a portfolio of assets. We find there is substantial economic value in conditioning on the proposed models.
Abstract:
‘Modern’ Phillips curve theories predict that inflation is an integrated, or near integrated, process. However, inflation appears bounded above and below in developed economies, so it cannot be ‘truly’ integrated and is more likely stationary around a shifting mean. If agents believe inflation is integrated, as in the ‘modern’ theories, then they are making systematic errors concerning the statistical process of inflation. An alternative theory of the Phillips curve is developed that is consistent with the ‘true’ statistical process of inflation. It is demonstrated that United States inflation data are consistent with the alternative theory but not with the existing ‘modern’ theories.
Abstract:
'Modern' theories of the Phillips curve imply that inflation is an integrated, or near integrated, process. This paper explains this implication and why these 'modern' theories are logically inconsistent with what is commonly known about the statistical process of inflation.
Abstract:
The usual way to investigate the statistical properties of finitely generated subgroups of free groups, and of finite presentations of groups, is based on the so-called word-based distribution: subgroups are generated (finite presentations are determined) by randomly chosen k-tuples of reduced words, whose maximal length is allowed to tend to infinity. In this paper we adopt a different, though equally natural point of view: we investigate the statistical properties of the same objects, but with respect to the so-called graph-based distribution, recently introduced by Bassino, Nicaud and Weil. Here, subgroups (and finite presentations) are determined by randomly chosen Stallings graphs whose number of vertices tends to infinity. Our results show that these two distributions behave quite differently from each other, shedding a new light on which properties of finitely generated subgroups can be considered frequent or rare. For example, we show that malnormal subgroups of a free group are negligible in the graph-based distribution, while they are exponentially generic in the word-based distribution. Quite surprisingly, a random finite presentation generically presents the trivial group in this new distribution, while in the classical one it is known to generically present an infinite hyperbolic group.
Abstract:
Limited information is available regarding the methodology required to characterize hashish seizures when assessing the presence or absence of a chemical link between two seizures. This casework report presents the methodology applied to assess whether two different police seizures came from the same block before it was split. The chemical signature was extracted using GC-MS analysis, and the implemented methodology consists of a study of intra- and inter-variability distributions based on measuring the similarity of chemical profiles across a number of hashish seizures and calculating the Pearson correlation coefficient. Different statistical scenarios (i.e., combinations of data pretreatment techniques and selections of target compounds) were tested to find the most discriminating one. Seven compounds showing high discrimination capability were selected, to which a specific statistical data pretreatment was applied. Based on the results, the statistical model built for comparing hashish seizures leads to low error rates. Therefore, the implemented methodology is suitable for the chemical profiling of hashish seizures.
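The core similarity measurement described above can be sketched as follows. The peak areas for the seven target compounds are hypothetical, and the simple normalization to relative proportions stands in for the paper's actual (unspecified) data pretreatment:

```python
import numpy as np

# Hypothetical GC-MS peak areas for seven target compounds in two seizures
profile_a = np.array([12.1, 3.4, 8.7, 0.9, 5.5, 2.2, 7.8])
profile_b = np.array([11.8, 3.6, 8.2, 1.1, 5.9, 2.0, 7.5])

# Normalize each profile to relative proportions before comparison
norm_a = profile_a / profile_a.sum()
norm_b = profile_b / profile_b.sum()

# Pearson correlation coefficient as the profile-similarity measure
r = np.corrcoef(norm_a, norm_b)[0, 1]
print(r > 0.99)  # prints True: highly similar chemical profiles
```

In the casework setting, a score like `r` is compared against the intra-variability (same block) and inter-variability (different blocks) distributions estimated from many seizures, which is what keeps the error rates low.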
Abstract:
This study presents classification criteria for two-class Cannabis seedlings. As the cultivation of drug-type cannabis is forbidden in Switzerland, law enforcement authorities regularly ask laboratories to determine the chemotype of cannabis plants from seized material in order to ascertain whether a plantation is legal. In this study, the classification analysis is based on data obtained from the relative proportions of three major leaf compounds measured by gas chromatography interfaced with mass spectrometry (GC-MS). The aim is to discriminate between drug-type (illegal) and fiber-type (legal) cannabis at an early stage of growth. A Bayesian procedure is proposed: a Bayes factor is computed and classification is performed on the basis of the decision maker's specifications (i.e., prior probability distributions on cannabis type and the consequences of classification measured by losses). Classification rates are computed with two statistical models and the results are compared. A sensitivity analysis is then performed to assess the robustness of the classification criteria.
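The Bayes factor step can be sketched as follows. The Gaussian likelihood models and every numeric parameter are illustrative assumptions, not values or models from the study:

```python
import math

def gaussian_pdf(x: float, mean: float, sd: float) -> float:
    """Normal density, used here as a simple class-conditional likelihood."""
    z = (x - mean) / sd
    return math.exp(-0.5 * z * z) / (sd * math.sqrt(2.0 * math.pi))

# Illustrative class-conditional models: relative proportion of a
# hypothetical drug-type marker compound in leaf material.
drug_mean, drug_sd = 0.75, 0.10    # drug-type chemotype
fiber_mean, fiber_sd = 0.20, 0.08  # fiber-type chemotype

def bayes_factor(x: float) -> float:
    """BF > 1 favors the drug-type hypothesis over the fiber-type one."""
    return gaussian_pdf(x, drug_mean, drug_sd) / gaussian_pdf(x, fiber_mean, fiber_sd)

# The full decision combines the BF with prior odds and the decision
# maker's losses; with equal priors and symmetric losses it reduces to BF > 1.
observed = 0.68
print(bayes_factor(observed) > 1.0)  # prints True: classified as drug type
```

Sensitivity analysis, as in the abstract, would repeat the classification while varying the priors, losses, and likelihood models to see how stable the decision boundary is.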