88 results for robust atomic distributed amorphous


Relevance:

20.00%

Publisher:

Abstract:

The solvability of the problem of fair exchange in a synchronous system subject to Byzantine failures is investigated in this work. The fair exchange problem arises when a group of processes are required to exchange digital items in a fair manner, which means that either each process obtains the item it was expecting or no process obtains any information on the inputs of others. After introducing a novel specification of fair exchange that clearly separates safety and liveness, we give an overview of the difficulty of solving such a problem in the context of a fully connected topology. On the one hand, we show that no solution to fair exchange exists in the absence of an identified process that every process can trust a priori; on the other hand, a well-known solution to fair exchange relying on a trusted third party is recalled. These two results lead us to complete our system model with a flexible representation of the notion of trust. We then show that fair exchange is solvable if and only if a connectivity condition, named the reachable majority condition, is satisfied. The necessity of the condition is proven by an impossibility result and its sufficiency by presenting a general solution to fair exchange relying on a set of trusted processes. The focus is then turned towards a specific network topology in order to provide a fully decentralized, yet realistic, solution to fair exchange. The general solution mentioned above is optimized by reducing the computational load assumed by trusted processes as far as possible. Accordingly, our fair exchange protocol relies on trusted tamperproof modules that have limited communication abilities and are only required in key steps of the algorithm. This modular solution is then implemented in the context of a pedagogical application developed for illustrating and apprehending the complexity of fair exchange. This application, which also includes the implementation of a wide range of Byzantine behaviors, allows executions of the algorithm to be set up and monitored through a graphical display. Surprisingly, some of our results on fair exchange seem to contradict those found in the literature on secure multiparty computation, a problem from the field of modern cryptography, although the two problems have much in common. Both problems are closely related to the notion of trusted third party, but their approaches and descriptions differ greatly. By introducing a common specification framework, we propose a comparison in order to clarify their differences and the possible origins of the confusion between them. This leads us to introduce the problem of generalized fair computation, a generalization of fair exchange. Finally, a solution to this new problem is given by generalizing our modular solution to fair exchange.
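The abstract itself contains no pseudocode; purely as an illustration of the classical trusted-third-party pattern it recalls, the following minimal Python sketch escrows every item and releases them only once all expected deposits have arrived (all names are ours, and Byzantine behaviour, signatures and timeouts are deliberately left out):

    # Minimal sketch of the classical trusted-third-party (TTP) pattern recalled
    # in the abstract. All names are illustrative; the protocols in the work
    # itself also handle Byzantine behaviour, signatures and timeouts.
    class TrustedThirdParty:
        def __init__(self, participants):
            self.expected = set(participants)   # processes that must deposit an item
            self.deposits = {}                  # participant -> item held in escrow

        def deposit(self, participant, item):
            if participant not in self.expected:
                raise ValueError("unknown participant")
            self.deposits[participant] = item
            return self._maybe_release()

        def _maybe_release(self):
            # Fairness: either every participant obtains all the items it was
            # expecting, or nothing is revealed to anyone.
            if set(self.deposits) == self.expected:
                return dict(self.deposits)      # release all items at once
            return None                         # keep everything in escrow

    # Two processes exchanging their items through the TTP.
    ttp = TrustedThirdParty({"p1", "p2"})
    assert ttp.deposit("p1", "item-of-p1") is None            # nothing released yet
    assert ttp.deposit("p2", "item-of-p2") == {"p1": "item-of-p1",
                                               "p2": "item-of-p2"}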

Relevance:

20.00%

Publisher:

Abstract:

We consider robust parametric procedures for univariate discrete distributions, focusing on the negative binomial model. The procedures are based on three steps: first, a very robust, but possibly inefficient, estimate of the model parameters is computed; second, this initial model is used to identify outliers, which are then removed from the sample; third, a corrected maximum likelihood estimator is computed with the remaining observations. The final estimate inherits the breakdown point (bdp) of the initial one and its efficiency can be significantly higher. Analogous procedures were proposed in [1], [2], [5] for the continuous case. A comparison of the asymptotic bias of various estimates under point contamination identifies the minimum Neyman's chi-squared disparity estimate as a good choice for the initial step. Various minimum disparity estimators were explored by Lindsay [4], who showed that the minimum Neyman's chi-squared estimate has a 50% bdp under point contamination; in addition, it is asymptotically fully efficient at the model. However, the finite sample efficiency of this estimate under the uncontaminated negative binomial model is usually much lower than 100% and the bias can be strong. We show that its performance can then be greatly improved using the three-step procedure outlined above. In addition, we compare the final estimate with the procedure described in
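As an informal illustration of the three-step scheme described above (robust initial fit, outlier removal, corrected maximum likelihood refit), the sketch below applies it to a simulated negative binomial sample; the quantile-based initial estimate and the probability cutoff are our own simplifications and merely stand in for the minimum Neyman's chi-squared disparity estimate used in the paper:

    # Illustrative sketch, not the paper's estimator: three-step robust fit of a
    # negative binomial sample. Step 1 uses a crude quantile-based initial fit in
    # place of the minimum Neyman's chi-squared disparity estimate; step 2 flags
    # observations that are very unlikely under the initial model; step 3 refits
    # by maximum likelihood on the retained observations.
    import numpy as np
    from scipy import stats, optimize

    def nb_negloglik(params, x):
        r, p = params
        return -np.sum(stats.nbinom.logpmf(x, r, p))

    def robust_nb_fit(x, cutoff=1e-4):
        # Step 1: rough initial estimate from the median and an IQR-based scale.
        m = np.median(x)
        s2 = (np.subtract(*np.percentile(x, [75, 25])) / 1.349) ** 2  # robust variance proxy
        s2 = max(s2, m + 1e-6)                      # NB requires variance > mean
        r0 = m**2 / (s2 - m)
        p0 = r0 / (r0 + m)
        # Step 2: discard observations with very low probability under the initial model.
        keep = stats.nbinom.pmf(x, r0, p0) > cutoff
        # Step 3: maximum likelihood on the remaining observations.
        res = optimize.minimize(nb_negloglik, x0=[r0, p0], args=(x[keep],),
                                bounds=[(1e-6, None), (1e-6, 1 - 1e-6)])
        return res.x                                # estimated (r, p)

    rng = np.random.default_rng(0)
    sample = np.concatenate([rng.negative_binomial(5, 0.5, 200), [80, 95]])  # two outliers
    print(robust_nb_fit(sample))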

Relevance:

20.00%

Publisher:

Abstract:

Snow cover is an important control in mountain environments, and a shift of the snow-free period triggered by climate warming can strongly impact ecosystem dynamics. Changing snow patterns can have severe effects on alpine plant distribution and diversity. It thus becomes urgent to provide spatially explicit assessments of snow cover changes that can be incorporated into correlative or empirical species distribution models (SDMs). Here, we provide for the first time a comparison of two physically based snow distribution models (PREVAH and SnowModel) to produce snow cover maps (SCMs) at a fine spatial resolution in a mountain landscape in Austria. SCMs have been evaluated with SPOT-HRVIR images, and predictions of snow water equivalent from the two models with ground measurements. Finally, SCMs of the two models have been compared under a climate warming scenario for the end of the century. The predictive performances of PREVAH and SnowModel were similar when validated with the SPOT images. However, the tendency to overestimate snow cover was slightly lower with SnowModel during the accumulation period, whereas it was lower with PREVAH during the melting period. The rate of true positives during the melting period was on average two times higher with SnowModel, with a lower overestimation of snow water equivalent. Our results support recommending the use of SnowModel in SDMs because it better captures persisting snow patches at the end of the snow season, which is important when modelling the response of species to long-lasting snow cover and evaluating whether they might survive under climate change.
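The evaluation against SPOT-HRVIR images amounts to a pixel-wise comparison of binary snow masks; a generic sketch of the true-positive and overestimation rates used in such comparisons (array names, metric definitions and the synthetic masks are ours, not the study's data) is:

    # Generic pixel-wise evaluation of a binary snow cover map against a
    # satellite-derived reference mask, in the spirit of the comparison above.
    import numpy as np

    def scm_scores(modelled, observed):
        """modelled, observed: boolean arrays (True = snow-covered pixel)."""
        tp = np.sum(modelled & observed)      # snow predicted and observed
        fp = np.sum(modelled & ~observed)     # snow predicted but absent (overestimation)
        fn = np.sum(~modelled & observed)     # snow missed by the model
        true_positive_rate = tp / (tp + fn)
        overestimation_rate = fp / np.sum(modelled)
        return true_positive_rate, overestimation_rate

    rng = np.random.default_rng(1)
    observed = rng.random((100, 100)) > 0.6                  # hypothetical reference mask
    modelled = observed ^ (rng.random((100, 100)) > 0.9)     # model mask with some disagreement
    print(scm_scores(modelled, observed))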

Relevance:

20.00%

Publisher:

Abstract:

Lexical diversity measures are notoriously sensitive to variations of sample size and recent approaches to this issue typically involve the computation of the average variety of lexical units in random subsamples of fixed size. This methodology has been further extended to measures of inflectional diversity such as the average number of wordforms per lexeme, also known as the mean size of paradigm (MSP) index. In this contribution we argue that, while random sampling can indeed be used to increase the robustness of inflectional diversity measures, using a fixed subsample size is only justified under the hypothesis that the corpora that we compare have the same degree of lexematic diversity. In the more general case where they may have differing degrees of lexematic diversity, a more sophisticated strategy can and should be adopted. A novel approach to the measurement of inflectional diversity is proposed, aiming to cope not only with variations of sample size, but also with variations of lexematic diversity. The robustness of this new method is empirically assessed and the results show that while there is still room for improvement, the proposed methodology considerably attenuates the impact of lexematic diversity discrepancies on the measurement of inflectional diversity.
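As a concrete illustration of the quantities discussed above, the sketch below computes the mean size of paradigm (MSP) index, i.e. the number of distinct wordforms divided by the number of distinct lexemes, together with its classical fixed-size random subsampling variant; the data format and function names are illustrative, not taken from the paper:

    # MSP index and its fixed-size random subsampling variant (classical approach).
    # Tokens are (wordform, lexeme) pairs; the toy corpus is made up.
    import random

    def msp(tokens):
        """Mean size of paradigm: distinct wordforms per distinct lexeme."""
        wordforms = {w for w, _ in tokens}
        lexemes = {l for _, l in tokens}
        return len(wordforms) / len(lexemes)

    def mean_msp_subsampled(tokens, subsample_size, n_subsamples=1000, seed=0):
        """Average MSP over random subsamples of a fixed size."""
        rng = random.Random(seed)
        return sum(msp(rng.sample(tokens, subsample_size))
                   for _ in range(n_subsamples)) / n_subsamples

    corpus = [("walks", "walk"), ("walked", "walk"), ("walking", "walk"),
              ("cats", "cat"), ("cat", "cat"), ("ran", "run"), ("runs", "run")]
    print(msp(corpus))                       # 7 wordforms / 3 lexemes ≈ 2.33
    print(mean_msp_subsampled(corpus, 5))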

Relevance:

20.00%

Publisher:

Abstract:

Introduction: Gamma Knife surgery (GKS) is a noninvasive neurosurgical stereotactic procedure, increasingly used as an alternative to open functional procedures. This includes the targeting of the ventrointermediate nucleus of the thalamus (Vim) for tremor. Objective: To enhance anatomic imaging for Vim GKS using high-field (7 T) MRI and diffusion-weighted imaging (DWI). Methods: Five young healthy subjects and two patients were scanned on both 3 T and 7 T MRI. The protocol was the same in all cases and included T1-weighted (T1w) imaging and DWI at 3 T, and susceptibility-weighted imaging (SWI) at 7 T for the visualization of thalamic subparts. SWI was further integrated into the Gamma Plan software® (LGP, Elekta Instruments AB, Sweden) and co-registered with the 3 T images. A simulation of Vim targeting was done using the quadrilatere of Guyot. Furthermore, the position of the resulting target was correlated with the position of the Vim on SWI and on DWI (after clustering of the different thalamic nuclei). Results: For the five healthy subjects, there was a good correlation between the position of the Vim on SWI, on DWI and in the GKS targeting. For the patients, SWI helped in positioning the target on the pretherapeutic acquisitions. On posttherapeutic sequences, the supposed position of the Vim on SWI matched the corresponding contrast enhancement seen at follow-up MRI. Additionally, on the patients' follow-up T1w images, we could observe a small area of contrast enhancement corresponding to the target used in GKS (the Vim), which belongs to the ventral-lateral-ventral (VLV) nuclei group. Our clustering method resulted in seven thalamic groups. Conclusion: The use of SWI provided us with a superior resolution and an improved image contrast within the central gray matter, enabling us to directly visualize the Vim. We additionally propose a novel robust method for segmenting the thalamus into seven anatomical groups based on DWI. The localization of the GKS target on the follow-up T1w images, as well as the position of the Vim on 7 T, were used as a gold standard for the validation of the VLV cluster's location. The contrast enhancement corresponding to the targeted area was always localized inside the expected cluster, providing strong evidence of the VLV segmentation accuracy. The anatomical correlation between the direct visualization on 7 T and the current targeting methods on 3 T (e.g., the quadrilatere of Guyot, histological atlases, DWI) seems to show a very good anatomical match.
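The DWI-based clustering step mentioned above is not detailed in the abstract; purely as a generic illustration of grouping thalamic voxels into seven clusters from per-voxel diffusion features, one could proceed as follows (k-means and the synthetic features are our own assumptions, not the authors' method):

    # Generic sketch: cluster thalamic voxels into seven groups from per-voxel
    # diffusion features. This is NOT the paper's segmentation method, which is
    # not specified in the abstract; feature choice and k-means are assumptions.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    n_voxels = 5000
    features = rng.normal(size=(n_voxels, 6))     # hypothetical per-voxel diffusion features

    labels = KMeans(n_clusters=7, n_init=10, random_state=0).fit_predict(features)
    print(np.bincount(labels))                    # voxel count per thalamic group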

Relevance:

20.00%

Publisher:

Abstract:

The 4πβ-γ coincidence counting method and its close relatives are widely used for the primary standardization of radioactivity. Both the general formalism and specific implementations of these methods have been well documented. In particular, previous papers contain the extrapolation equations used for various decay schemes, methods for determining model parameters and, in some cases, tabulated uncertainty budgets. Two things often lacking from experimental reports are the rationale for estimating uncertainties in a specific way and the details of exactly how a specific component of uncertainty was estimated. Furthermore, correlations among the components of uncertainty are rarely mentioned. To fill in these gaps, the present article shares best practices from a few practitioners of this craft. We explain and demonstrate with examples how these approaches can be used to estimate the uncertainty of the reported massic activity. We describe uncertainties due to measurement variability, extrapolation functions, dead-time and resolving-time effects, gravimetric links, and nuclear and atomic data. Most importantly, a thorough understanding of the measurement system and its response to the decay under study can be used to derive a robust estimate of the measurement uncertainty.
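For context, the extrapolation equations mentioned above build on the standard idealized coincidence relations (single β branch, no background or dead-time effects), which are textbook material rather than specific to this article:

    N_\beta = N_0\,\varepsilon_\beta, \qquad
    N_\gamma = N_0\,\varepsilon_\gamma, \qquad
    N_c = N_0\,\varepsilon_\beta\,\varepsilon_\gamma
    \quad\Longrightarrow\quad
    \frac{N_\beta N_\gamma}{N_c} = N_0

In practice the activity N_0 is obtained by plotting N_β N_γ / N_c against the inefficiency 1 − N_c/N_γ and extrapolating to zero inefficiency; the choice and uncertainty of that extrapolation function is one of the components the article discusses.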

Relevance:

20.00%

Publisher:

Abstract:

Social insects are promising model systems for epigenetics due to their immense morphological and behavioral plasticity. Reports that DNA methylation differs between the queen and worker castes in social insects [1-4] have implied a role for DNA methylation in regulating division of labor. To better understand the function of DNA methylation in social insects, we performed whole-genome bisulfite sequencing on brains of the clonal raider ant Cerapachys biroi, whose colonies alternate between reproductive (queen-like) and brood care (worker-like) phases [5]. Many cytosines were methylated in all replicates (on average 29.5% of the methylated cytosines in a given replicate), indicating that a large proportion of the C. biroi brain methylome is robust. Robust DNA methylation occurred preferentially in exonic CpGs of highly and stably expressed genes involved in core functions. Our analyses did not detect any differences in DNA methylation between the queen-like and worker-like phases, suggesting that DNA methylation is not associated with changes in reproduction and behavior in C. biroi. Finally, many cytosines were methylated in one sample only, due to either biological or experimental variation. By applying the statistical methods used in previous studies [1-4, 6] to our data, we show that such sample-specific DNA methylation may underlie the previous findings of queen- and worker-specific methylation. We argue that there is currently no evidence that genome-wide variation in DNA methylation is associated with the queen and worker castes in social insects, and we call for a more careful interpretation of the available data.
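The replicate-overlap statistic reported above (cytosines methylated in all replicates, as a share of those methylated in a given replicate) is essentially a set-intersection computation over per-replicate methylation calls; a minimal sketch with made-up call sets:

    # Minimal sketch of the replicate-overlap statistic described above: for each
    # replicate, the fraction of its methylated cytosines that are methylated in
    # every replicate. Positions and call sets are invented for illustration.
    replicate_calls = {                      # replicate -> set of methylated CpG positions
        "rep1": {("scf1", 100), ("scf1", 250), ("scf2", 40), ("scf3", 7)},
        "rep2": {("scf1", 100), ("scf1", 250), ("scf2", 40), ("scf2", 90)},
        "rep3": {("scf1", 100), ("scf2", 40), ("scf3", 7), ("scf4", 12)},
    }

    core = set.intersection(*replicate_calls.values())   # methylated in all replicates
    for name, calls in replicate_calls.items():
        share = len(core & calls) / len(calls)
        print(f"{name}: {share:.1%} of methylated CpGs are methylated in every replicate")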