922 results for sums of squares
Abstract:
The problem of estimating the number of motor units N in a muscle is embedded in a general stochastic model using the notion of thinning from point process theory. In the paper a new moment-type estimator for the number of motor units in a muscle is defined, derived using random sums with independently thinned terms. Asymptotic normality of the estimator is shown, and its practical value is demonstrated with bootstrap and approximate confidence intervals for a data set from a 31-year-old, healthy, right-handed female volunteer. Moreover, simulation results are presented, and Monte Carlo based quantiles, means, and variances are calculated for N ∈ {300, 600, 1000}.
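A minimal sketch of the idea behind a moment-type estimator for thinned random sums, under a deliberately simplified model (Bernoulli firing with known probability p and known single-unit potential statistics; these assumptions and all parameter values are illustrative, not the paper's exact setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical simplified model: N motor units; under a stimulus each unit
# fires independently with probability p ("independent thinning"),
# contributing its single-unit potential X_i. The recorded compound response
# is the thinned random sum S = sum_i xi_i * X_i, xi_i ~ Bernoulli(p).
N_true, p = 300, 0.05
mu, sigma = 50e-6, 10e-6          # assumed single-unit potential mean/sd (V)

def compound_response(n_stimuli):
    fires = rng.random((n_stimuli, N_true)) < p
    amps = rng.normal(mu, sigma, (n_stimuli, N_true))
    return (fires * amps).sum(axis=1)

S = compound_response(10_000)

# Moment-type estimator: E[S] = N * p * mu  =>  N_hat = mean(S) / (p * mu).
N_hat = S.mean() / (p * mu)
print(f"true N = {N_true}, moment estimate = {N_hat:.1f}")
```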
Abstract:
INTRODUCTION: Cadaver dogs are known as valuable forensic tools in crime scene investigations. Scientific research attempting to verify their value is largely lacking, specifically for scents associated with the early postmortem interval. The aim of our investigation was the comparative evaluation of the reliability, accuracy, and specificity of three cadaver dogs belonging to the Hamburg State Police in the detection of scents during the early postmortem interval. MATERIAL AND METHODS: Carpet squares were used as an odor-transporting medium after they had been contaminated with the scent of two recently deceased bodies (PMI < 3 h). The contamination lasted 2 min or 10 min, without any direct contact between the carpet and the corpse. Comparative searches by the dogs were performed over a period of 65 days (10 min contamination) and 35 days (2 min contamination). RESULTS: The results of this study indicate that the well-trained cadaver dog is an outstanding tool for crime scene investigation, displaying excellent sensitivity (75–100%), specificity (91–100%), positive predictive value (90–100%), negative predictive value (90–100%), and accuracy (92–100%).
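The five reported performance figures all derive from a 2x2 confusion matrix of trial outcomes. A short sketch of the arithmetic, with made-up counts (not the study's data):

```python
# Illustrative 2x2 confusion matrix for a detection-dog trial
# (counts are invented, not taken from the study).
tp, fn = 19, 1      # contaminated squares: indicated / missed
tn, fp = 45, 2      # blank squares: ignored / falsely indicated

sensitivity = tp / (tp + fn)            # P(alert | scent present)
specificity = tn / (tn + fp)            # P(no alert | scent absent)
ppv = tp / (tp + fp)                    # P(scent present | alert)
npv = tn / (tn + fn)                    # P(scent absent | no alert)
accuracy = (tp + tn) / (tp + fn + tn + fp)

for name, v in [("sensitivity", sensitivity), ("specificity", specificity),
                ("PPV", ppv), ("NPV", npv), ("accuracy", accuracy)]:
    print(f"{name}: {100 * v:.0f}%")
```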
Abstract:
This paper studied two different regression techniques for pelvic shape prediction: partial least squares regression (PLSR) and principal component regression (PCR). Three different predictors (surface landmarks, morphological parameters, or surface models of neighboring structures) were used in a cross-validation study to predict the pelvic shape. Results obtained from applying these two regression techniques were compared to the population mean model. In almost all the prediction experiments, both regression techniques consistently generated better results than the population mean model, while the difference in prediction accuracy between the two regression methods was not statistically significant (α = 0.01).
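A minimal sketch of how PLSR and PCR can be compared under cross-validation, using scikit-learn and synthetic stand-in data (the dimensions and the linear generative model are assumptions for illustration, not the paper's pelvic data):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)

# Synthetic stand-in: 80 subjects, 30 predictor values (e.g. landmark
# coordinates) linearly related to 5 shape coefficients plus noise.
X = rng.normal(size=(80, 30))
W = rng.normal(size=(30, 5))
Y = X @ W + 0.5 * rng.normal(size=(80, 5))

plsr = PLSRegression(n_components=5)
pcr = make_pipeline(PCA(n_components=5), LinearRegression())

for name, model in [("PLSR", plsr), ("PCR", pcr)]:
    r2 = cross_val_score(model, X, Y, cv=5, scoring="r2")
    print(f"{name}: mean CV R^2 = {r2.mean():.3f}")
```

The design difference is that PCR chooses components that explain variance in the predictors alone, while PLSR chooses components that covary with the response, which is why the two can behave differently when predictive directions carry little predictor variance.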
Abstract:
This article proposes computing sensitivities of upper tail probabilities of random sums by the saddlepoint approximation. The sensitivity considered is the derivative of the upper tail probability with respect to the parameter of the summation index distribution. Random sums with Poisson or geometrically distributed summation indices and gamma or Weibull distributed summands are considered. The score method with importance sampling is considered as an alternative approximation. Numerical studies show that both the saddlepoint approximation and the score method with importance sampling are very accurate, but the saddlepoint approximation is substantially faster. Thus, the suggested saddlepoint approximation can be conveniently used in various scientific problems.
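A sketch of the Lugannani-Rice saddlepoint approximation for one of the cases named, a Poisson-indexed sum of gamma summands, where the cumulant generating function is available in closed form. Parameter values are illustrative, and the finite-difference sensitivity at the end is a stand-in for the analytic derivative the article develops:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

# Random sum S = X_1 + ... + X_M with M ~ Poisson(lam), X_i ~ Gamma(shape a, rate b).
a, b = 2.0, 1.0

def K(s, lam):   return lam * ((1 - s / b) ** (-a) - 1)          # CGF of S
def K1(s, lam):  return lam * a / b * (1 - s / b) ** (-a - 1)    # K'
def K2(s, lam):  return lam * a * (a + 1) / b**2 * (1 - s / b) ** (-a - 2)

def tail_prob(y, lam):
    # Solve the saddlepoint equation K'(s_hat) = y; s_hat > 0 for y > E[S].
    s_hat = brentq(lambda s: K1(s, lam) - y, 1e-12, b - 1e-9)
    w = np.sqrt(2 * (s_hat * y - K(s_hat, lam)))
    u = s_hat * np.sqrt(K2(s_hat, lam))
    return norm.sf(w) + norm.pdf(w) * (1 / u - 1 / w)             # Lugannani-Rice

lam, y = 10.0, 35.0                      # E[S] = lam * a / b = 20
print(f"P(S > {y}) ~ {tail_prob(y, lam):.4e}")

# Sensitivity w.r.t. lam: a central finite difference on the approximation,
# standing in for the article's analytic differentiation.
eps = 1e-5
dP = (tail_prob(y, lam + eps) - tail_prob(y, lam - eps)) / (2 * eps)
print(f"d/d lam P(S > {y}) ~ {dP:.4e}")
```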
Abstract:
The concentrations of chironomid remains in lake sediments are very variable and, therefore, chironomid stratigraphies often include samples with a low number of counts. Thus, the effect of low count sums on reconstructed temperatures is an important issue when applying chironomid-temperature inference models. Using an existing data set, we simulated low count sums by randomly picking subsets of head capsules from surface-sediment samples with a high number of specimens. Subsequently, a chironomid-temperature inference model was used to assess how the inferred temperatures are affected by low counts. The simulations indicate that the variability of inferred temperatures increases progressively with decreasing count sums. At counts below 50 specimens, a further reduction in count sum can cause a disproportionate increase in the variation of inferred temperatures, whereas at higher count sums the inferences are more stable. Furthermore, low count samples may consistently infer too low or too high temperatures and, therefore, produce a systematic error in a reconstruction. Smoothing reconstructed temperatures downcore is proposed as a possible way to compensate for the high variability due to low count sums. By combining adjacent samples in a stratigraphy to produce samples of a more reliable size, it is possible to assess whether low counts cause a systematic error in inferred temperatures.
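A toy re-run of the subsampling experiment: draw progressively smaller subsamples without replacement from a high-count sample and watch the spread of inferred temperatures grow. The weighted-averaging transfer function, taxon optima, and counts below are invented for illustration, not the study's data set or inference model:

```python
import numpy as np

rng = np.random.default_rng(3)

n_taxa = 20
optima = rng.uniform(8, 16, n_taxa)                  # hypothetical taxon optima (deg C)
full_counts = rng.multinomial(500, rng.dirichlet(np.ones(n_taxa)))
pool = np.repeat(np.arange(n_taxa), full_counts)     # individual head capsules

def infer_temp(counts):
    # Stand-in weighted-averaging inference: abundance-weighted mean of optima.
    return (counts * optima).sum() / counts.sum()

for count_sum in (200, 100, 50, 25):
    temps = [infer_temp(np.bincount(rng.choice(pool, count_sum, replace=False),
                                    minlength=n_taxa))
             for _ in range(1000)]
    print(f"count sum {count_sum:3d}: sd of inferred T = {np.std(temps):.2f} deg C")
```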
Abstract:
Fission product yields are fundamental parameters for several nuclear engineering calculations, in particular for burn-up/activation problems. The impact of their uncertainties was widely studied in the past, and evaluations were released, although still incomplete. Recently, the nuclear community expressed the need for full fission yield covariance matrices to produce inventory calculation results that take into account the complete uncertainty data. In this work, we studied and applied a Bayesian/generalised least-squares method for covariance generation, and compared the generated uncertainties to the original data stored in the JEFF-3.1.2 library. We then focused on the effect of fission yield covariance information on fission pulse decay heat results for thermal fission of 235U. Calculations were carried out with different codes (ACAB and ALEPH-2) after introducing the new covariance values, and the results were compared with those obtained with the uncertainty data currently provided by the library. The uncertainty quantification was performed with the Monte Carlo sampling technique. Indeed, correlations between fission yields strongly affect the statistics of decay heat.
Introduction
Nowadays, any engineering calculation performed in the nuclear field should be accompanied by an uncertainty analysis, in which different sources of uncertainties are taken into account. Works such as those performed under the UAM project (Ivanov, et al., 2013) treat nuclear data as a source of uncertainty, in particular cross-section data, for which uncertainties given in the form of covariance matrices are already provided in the major nuclear data libraries. Meanwhile, fission yield uncertainties were often neglected or treated superficially, because their effects were considered of second order compared to cross-sections (Garcia-Herranz, et al., 2010). However, the Working Party on International Nuclear Data Evaluation Co-operation (WPEC)
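A toy illustration of why the off-diagonal covariance terms matter in the Monte Carlo sampling step: sample correlated yields, propagate them through a fixed per-nuclide decay-heat contribution, and compare the spread with and without correlations. Means, covariances, and heat contributions are invented placeholders, not JEFF-3.1.2 data:

```python
import numpy as np

rng = np.random.default_rng(4)

n = 5
mean_y = rng.uniform(0.01, 0.06, n)                  # hypothetical yields
sd = 0.05 * mean_y                                   # 5 % relative uncertainty
corr = np.full((n, n), -0.2) + 1.2 * np.eye(n)       # mild anti-correlation
cov = np.outer(sd, sd) * corr
heat_per_fission = rng.uniform(0.5, 2.0, n)          # MeV/fission, illustrative

for label, C in [("correlated", cov), ("diagonal only", np.diag(sd**2))]:
    samples = rng.multivariate_normal(mean_y, C, size=100_000)
    H = samples @ heat_per_fission                   # linear decay-heat proxy
    print(f"{label:14s}: mean = {H.mean():.4f}, sd = {H.std():.5f}")
```

With anti-correlated yields the uncertainties partially cancel in the sum, so ignoring the off-diagonal terms here overstates the decay-heat uncertainty; positively correlated yields would have the opposite effect.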
Abstract:
We analyse a class of estimators of the generalized diffusion coefficient for fractional Brownian motion B_t of known Hurst index H, based on weighted functionals of the single-time squared displacement. We show that for a certain choice of the weight function these functionals possess an ergodic property and thus provide the true, ensemble-averaged, generalized diffusion coefficient to any necessary precision from single-trajectory data, but at the expense of a progressively higher experimental resolution. Convergence is fastest around H ≈ 0.30, a value in the subdiffusive regime.
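A sketch of the estimator class on simulated data: generate one exact fBm trajectory (with E[B_t^2] = 2 D t^{2H}) via Cholesky factorization of its covariance, then form a weighted time average of the squared displacement. The weight w(t) proportional to 1/t is one simple choice for illustration; the paper analyses the weight-function family and its optimum, which is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(5)

H, D, n = 0.30, 1.0, 500
t = np.arange(1, n + 1, dtype=float)

# Exact fBm simulation via the Cholesky factor of the covariance matrix
# cov(t, s) = D * (t^{2H} + s^{2H} - |t - s|^{2H}).
cov = D * (t[:, None]**(2*H) + t[None, :]**(2*H)
           - np.abs(t[:, None] - t[None, :])**(2*H))
B = np.linalg.cholesky(cov) @ rng.standard_normal(n)

# Weighted functional of the single-time squared displacement:
# D_hat = sum_t w(t) * B_t^2 / (2 t^{2H}) / sum_t w(t), with w(t) ~ 1/t.
w = 1.0 / t
D_hat = np.sum(w * B**2 / (2 * t**(2*H))) / w.sum()
print(f"true D = {D}, single-trajectory estimate = {D_hat:.3f}")
```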
Abstract:
We give a partition of the critical strip, associated with each partial sum 1 + 2^z + ... + n^z of the Riemann zeta function for Re z < −1, formed by infinitely many rectangles, for which a formula allows us to count the number of zeros inside each of them with an error of at most two zeros. A generalization of this formula is also given for a large class of almost-periodic functions with bounded spectrum.
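A numerical companion to the zero-counting statement: the number of zeros of f_n(z) = 1 + 2^z + ... + n^z inside a contour equals (1/(2*pi*i)) times the contour integral of f'/f, by the argument principle. The rectangle below is an arbitrary test region chosen to straddle the strip containing the zeros, not one of the paper's partition rectangles:

```python
import numpy as np

n = 5
ks = np.arange(1, n + 1, dtype=float)

def logderiv(zs):
    # f'(z) / f(z) for f(z) = sum_k k^z, on an array of contour points
    powers = ks[:, None] ** zs[None, :]
    return (np.log(ks)[:, None] * powers).sum(axis=0) / powers.sum(axis=0)

def count_zeros(x0, x1, y0, y1, m=20_000):
    corners = [x0 + 1j*y0, x1 + 1j*y0, x1 + 1j*y1, x0 + 1j*y1, x0 + 1j*y0]
    total = 0.0j
    for a, b in zip(corners[:-1], corners[1:]):
        zs = a + (b - a) * np.linspace(0.0, 1.0, m)
        vals = logderiv(zs)
        total += np.sum((vals[1:] + vals[:-1]) / 2 * np.diff(zs))  # trapezoid rule
    return total / (2j * np.pi)

N = count_zeros(-2.0, 3.0, 0.0, 30.0)
print(f"zeros of 1 + 2^z + ... + 5^z in the rectangle: {N.real:.3f}")
```

The integral evaluates to (nearly) an integer whenever no zero lies on the contour; shrinking or shifting the rectangle shows how the count redistributes among subregions.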