993 results for Statistical peak moments


Relevance: 30.00%

Abstract:

The vibrational density of states (VDOS) in a supercooled polydisperse liquid is computed by diagonalizing the Hessian matrix evaluated at the potential-energy minima for systems with different values of polydispersity. An increase in polydispersity leads to an increase in the relative population of localized high-frequency modes. At low frequencies, the density of states shows an excess over the Debye squared-frequency law, which has been identified with the boson peak. The height of the boson peak increases with polydispersity and shows rather weak sensitivity to changes in temperature. While the modes comprising the boson peak appear to be largely delocalized, there is a sharp drop in the participation ratio of the modes just below the boson peak, indicative of the quasilocalized nature of the low-frequency vibrations. Study of the difference spectrum at two different polydispersities reveals that the increase in the height of the boson peak is due to a population shift from modes with frequencies above the maximum in the VDOS to those below it, indicating an increase in the fraction of unstable modes in the system. The latter is further supported by the observed facilitation of the dynamics by polydispersity. Since the strength of the liquid increases with polydispersity, the present result provides evidence that the intensity of the boson peak correlates positively with the strength of the liquid, as observed earlier in many experimental systems.
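The diagonalization and participation-ratio analysis described above can be sketched numerically. The toy example below uses a random stand-in Hessian, not the paper's supercooled-liquid configurations, but shows the generic pipeline: diagonalize the Hessian, read off mode frequencies, and compute a per-mode participation ratio.

```python
import numpy as np

# Toy sketch (a random stand-in Hessian, not the supercooled-liquid minima):
# diagonalize the Hessian to obtain normal-mode frequencies, then compute a
# per-mode participation ratio to gauge (de)localization.
rng = np.random.default_rng(0)

N, dim = 50, 3                      # hypothetical particle count, dimension

# Stand-in Hessian: random symmetric positive semi-definite matrix.
A = rng.normal(size=(dim * N, dim * N))
H = A @ A.T / (dim * N)

# Eigenvalues are squared mode frequencies; columns are mode eigenvectors.
omega2, modes = np.linalg.eigh(H)
omega = np.sqrt(np.clip(omega2, 0.0, None))

def participation_ratio(e):
    """~1 for fully delocalized modes, ~1/N for modes on a single particle."""
    p = np.sum(e.reshape(N, dim) ** 2, axis=1)   # per-particle weight
    return 1.0 / (N * np.sum(p ** 2))

pr = np.array([participation_ratio(modes[:, k]) for k in range(dim * N)])
```

In a real calculation `H` would come from second derivatives of the interaction potential at an energy minimum; a drop of `pr` toward 1/N flags quasilocalized modes.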

Relevance: 30.00%

Abstract:

Accelerated aging experiments have been conducted on a representative oil-pressboard insulation model to investigate the effect of constant and sequential stresses on partial discharge (PD) behavior, using a built-in phase-resolved partial discharge analyzer. A cycle of the applied voltage, starting from the zero of the positive half cycle, was divided into 16 equal phase windows (Φ1 to Φ16), and the PD magnitude distribution in each phase was determined. Based on the experimental results, three stages of the aging mechanism were identified. Gumbel's extreme value distribution of the largest element was used to model the first stage of the aging process. The second and subsequent stages were modeled using the two-parameter Weibull distribution. Spearman's non-parametric rank correlation test statistic and the Kolmogorov-Smirnov two-sample test were used to relate the aging process of each phase with the corresponding process of the full cycle. To bring out clearly the effect of stress level, its duration, and test procedure on the distribution parameters, and hence on the aging process, non-parametric ANOVA techniques such as the Kruskal-Wallis test and Fisher's LSD multiple-comparison test were used. Results of the analysis show that two phases (Φ13 and Φ14) in the vicinity of the negative voltage peak contribute significantly to the aging process, and their aging mechanism also correlated well with that of the corresponding full cycle. Attempts have been made to relate these results with the published work of other workers.
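The distribution modeling described above can be illustrated with standard tools. This minimal sketch uses synthetic PD magnitudes rather than the experimental data; it fits a two-parameter Weibull (location fixed at zero) to one phase window and compares two windows with the Kolmogorov-Smirnov two-sample test. The phase labels echo Φ13/Φ14 for illustration only.

```python
import numpy as np
from scipy import stats

# Sketch with synthetic PD magnitudes (not the experimental data): fit a
# two-parameter Weibull to one phase window and compare two windows with
# the Kolmogorov-Smirnov two-sample test.
rng = np.random.default_rng(1)

pd_phase13 = stats.weibull_min.rvs(c=1.8, scale=2.0, size=400, random_state=rng)
pd_phase14 = stats.weibull_min.rvs(c=1.8, scale=2.1, size=400, random_state=rng)

# Two-parameter fit: location fixed at zero, shape and scale estimated.
shape, loc, scale = stats.weibull_min.fit(pd_phase13, floc=0.0)

# Do the two phase windows share the same magnitude distribution?
ks_stat, p_value = stats.ks_2samp(pd_phase13, pd_phase14)
```

A large `p_value` means the two phase windows are statistically indistinguishable at the chosen level; the same comparison can be run between each phase and the full cycle.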

Relevance: 30.00%

Abstract:

A sequence of moments obtained from statistical trials encodes a classical probability distribution. However, it is well known that an incompatible set of moments arises in the quantum scenario, when correlation outcomes associated with measurements on spatially separated entangled states are considered. This feature, viz., the incompatibility of moments with a joint probability distribution, is reflected in the violation of Bell inequalities. Here, we focus on sequential measurements on a single quantum system and investigate if moments and joint probabilities are compatible with each other. By considering sequential measurement of a dichotomic dynamical observable at three different time intervals, we explicitly demonstrate that the moments and the probabilities are inconsistent with each other. Experimental results using a nuclear magnetic resonance system are reported here to corroborate these theoretical observations, viz., the incompatibility of the three-time joint probabilities with those extracted from the moment sequence when sequential measurements on a single-qubit system are considered.
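The incompatibility in question can be made concrete. For three dichotomic (±1) observables, any genuine joint distribution is recovered from the full moment sequence by an eight-term expansion; if a reconstructed probability comes out negative, the moments admit no joint distribution. The sketch below is our own illustration with made-up moment values, not the NMR data.

```python
import itertools

# Sketch of the consistency check (our own illustration with made-up moment
# values, not the paper's NMR data). For dichotomic (+/-1) observables the
# putative three-variable joint distribution is an eight-term expansion in
# the moments; a negative entry means no genuine joint distribution exists.
def joint_from_moments(m1, m2, m3, m12, m13, m23, m123):
    probs = {}
    for a, b, c in itertools.product((1, -1), repeat=3):
        probs[(a, b, c)] = (1 + a * m1 + b * m2 + c * m3
                            + a * b * m12 + a * c * m13 + b * c * m23
                            + a * b * c * m123) / 8.0
    return probs

# Independent unbiased coins: all moments vanish, uniform distribution.
classical = joint_from_moments(0, 0, 0, 0, 0, 0, 0)

# Pairwise correlations K12 = K13 = K23 = -0.6 violate the realizability
# condition 1 + K12 + K13 + K23 >= 0, so a negative "probability" appears.
incompatible = joint_from_moments(0, 0, 0, -0.6, -0.6, -0.6, 0)
negative = any(p < 0 for p in incompatible.values())
```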

Relevance: 30.00%

Abstract:

This thesis explores the problem of mobile robot navigation in dense human crowds. We begin by considering a fundamental impediment to classical motion planning algorithms called the freezing robot problem: once the environment surpasses a certain level of complexity, the planner decides that all forward paths are unsafe, and the robot freezes in place (or performs unnecessary maneuvers) to avoid collisions. Since a feasible path typically exists, this behavior is suboptimal. Existing approaches have focused on reducing predictive uncertainty by employing higher fidelity individual dynamics models or heuristically limiting the individual predictive covariance to prevent overcautious navigation. We demonstrate that both the individual prediction and the individual predictive uncertainty have little to do with this undesirable navigation behavior. Additionally, we provide evidence that dynamic agents are able to navigate in dense crowds by engaging in joint collision avoidance, cooperatively making room to create feasible trajectories. We accordingly develop interacting Gaussian processes, a prediction density that captures cooperative collision avoidance, and a "multiple goal" extension that models the goal driven nature of human decision making. Navigation naturally emerges as a statistic of this distribution.
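The interacting-Gaussian-process idea can be caricatured in a few lines. The sketch below is a heavy simplification of the thesis's construction: straight-line means with Gaussian wiggle stand in for GP posterior draws, and the interaction factor is a made-up proximity penalty. Joint samples are reweighted so that mutually avoiding trajectory pairs dominate, and the plan is read off as a statistic of the reweighted distribution.

```python
import numpy as np

# Heavily simplified sketch of the interacting-Gaussian-process idea (our
# illustration; the thesis uses full GP posteriors over trajectories): draw
# candidate future paths for robot and pedestrian, then reweight the joint
# samples by an interaction factor that penalizes close approaches.
rng = np.random.default_rng(2)

T = 10                      # prediction horizon (steps)
n_samples = 500

def sample_paths(start, goal, noise=0.1):
    """Straight-line mean to the goal plus Gaussian wiggle (stand-in for GP draws)."""
    mean = np.linspace(start, goal, T)            # (T, 2)
    return mean + rng.normal(scale=noise, size=(n_samples, T, 2))

robot = sample_paths(np.array([0.0, 0.0]), np.array([5.0, 0.0]))
person = sample_paths(np.array([5.0, 0.2]), np.array([0.0, 0.2]))

# Interaction weight: small when paths pass near each other at the same time.
d2 = np.sum((robot - person) ** 2, axis=2)        # (n_samples, T)
weights = np.prod(1.0 - np.exp(-d2 / 0.25), axis=1)
weights /= weights.sum()

# Navigation "emerges as a statistic": here, the weighted mean robot path.
planned = np.einsum('s,stk->tk', weights, robot)
```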

Most importantly, we empirically validate our models in the Chandler dining hall at Caltech during peak hours, and in the process, carry out the first extensive quantitative study of robot navigation in dense human crowds (collecting data on 488 runs). The multiple goal interacting Gaussian processes algorithm performs comparably with human teleoperators in crowd densities nearing 1 person/m², while a state-of-the-art noncooperative planner exhibits unsafe behavior more than 3 times as often as the multiple goal extension, and twice as often as the basic interacting Gaussian process approach. Furthermore, a reactive planner based on the widely used dynamic window approach proves insufficient for crowd densities above 0.55 people/m². For inclusive validation purposes, we also show that either our noncooperative planner or our reactive planner captures the salient characteristics of nearly any existing dynamic navigation algorithm. Based on these experimental results and theoretical observations, we conclude that a cooperation model is critical for safe and efficient robot navigation in dense human crowds.

Finally, we produce a large database of ground-truth pedestrian crowd data. We make this database publicly available for further scientific study of crowd prediction models, learning-from-demonstration algorithms, and human-robot interaction models in general.

Relevance: 30.00%

Abstract:

Spanish Relativity Meeting (ERE 2014), Valencia, Spain, September 1-5, 2014

Relevance: 30.00%

Abstract:

We study the dynamics of protein folding via statistical energy-landscape theory. In particular, we concentrate on the local-connectivity case, with the folding progress described by the fraction of native conformations. We find that the first-passage-time (FPT) distribution undergoes a dynamic transition at a temperature below which the FPT distribution develops a power-law tail, a signature of intermittent nonexponential kinetics in the folding dynamics. Possible applications to single-molecule dynamics experiments are discussed.

Relevance: 30.00%

Abstract:

A refined theoretical approach has been developed to study the double-differential cross sections (DDCSs) in proton-helium collisions as a function of the ratio of the ionized-electron velocity to the incident-proton velocity. The refinement is achieved in the present coupled-channel calculation by introducing a continuum distorted wave in the final state, coupled with discrete states that include direct as well as charge-transfer channels. It is confirmed that the electron-capture-to-the-continuum (ECC) peak is slightly shifted to a lower electron velocity than the equivelocity position. Compared with measurements and classical trajectory Monte Carlo (CTMC) calculations at 10 and 20 keV proton energies, excellent agreement of the ECC peak heights is achieved at both energies. However, a minor disagreement in the peak positions between the present calculation and the CTMC results is noted. The present calculation shows a smooth behavior of the DDCS on both sides of the peak, whereas the CTMC results show some oscillatory behavior, particularly to the left of the peak, associated with the statistical nature of CTMC calculations.

Relevance: 30.00%

Abstract:

Purpose: To elucidate the heritability of peak density and spatial width of macular pigment (MP) using a Classical Twin Study.

Methods: Fundus autofluorescence images were obtained at 488 nm from 86 subjects, comprising 43 twin pairs (21 monozygotic (MZ) and 22 dizygotic (DZ); 27 male, 59 female) aged 55 to 76 years (mean 62.2±5.3 years). The relative topographic distribution of MP was measured using a grey scale of intensity (0-255 units) within 7° eccentricity of the fovea. Relative peak MP density (rPMPD) and relative spatial distribution of MP (rSDMP) were used as the main outcome measures in the statistical analysis.

Results: A significantly higher correlation was found within MZ pairs than within DZ pairs for rPMPD (r=0.99, 95% confidence interval (CI) 0.93 to 1.00, and r=0.22, 95% CI -0.34 to 0.71, respectively), suggesting strong heritability of this trait. When rSDMP was compared, there was no significant difference between the correlations within MZ pairs (r=0.48, 95% CI -0.02 to 0.83) and DZ pairs (r=0.63, 95% CI 0.32 to 0.83); thus rSDMP is unlikely to have a considerable heritable component. In addition, there was no difference in any MP parameter when normal maculae were compared with early age-related macular degeneration (AMD) (rPMPD 0.36 vs 0.34, t=1.18, P=0.243; rSDMP 1.75 vs 1.75, t=0.028, P=0.977).

Conclusions: rPMPD is a strongly heritable trait, whereas rSDMP has minimal genetic influence and a greater influence from environmental factors. The presence of macular changes associated with early AMD did not appear to influence any of these pigment parameters.

Relevance: 30.00%

Abstract:

In recent years, wide-field sky surveys providing deep multi-band imaging have presented a new path for indirectly characterizing the progenitor populations of core-collapse supernovae (SNe): systematic light curve studies. We assemble a set of 76 grizy-band Type IIP SN light curves from Pan-STARRS1, obtained over a constant survey program of 4 years and classified using both spectroscopy and machine learning-based photometric techniques. We develop and apply a new Bayesian model for the full multi-band evolution of each light curve in the sample. We find no evidence of a sub-population of fast-declining explosions (historically referred to as "Type IIL" SNe). However, we identify a highly significant relation between the plateau-phase decay rate and peak luminosity among our SNe IIP. These results argue in favor of a single parameter, likely determined by initial stellar mass, predominantly controlling the explosions of red supergiants. This relation could also be applied to supernova cosmology, offering a standardizable candle good to an intrinsic scatter of 0.2 mag. We compare each light curve to physical models from hydrodynamic simulations to estimate progenitor initial masses and other properties of the Pan-STARRS1 Type IIP SN sample. We show that correction of systematic discrepancies between modeled and observed SN IIP light curve properties, together with an expanded grid of progenitor properties, is needed to enable robust progenitor inferences from multi-band light curve samples of this kind. This work will serve as a pathfinder for photometric studies of core-collapse SNe to be conducted through future wide-field transient searches.

Relevance: 30.00%

Abstract:

This paper investigates the characteristics of the shadowed fading observed in off-body communications channels at 5.8 GHz. This is realized with the aid of the κ-μ / gamma composite fading model, which assumes that the transmitted signal undergoes κ-μ fading subject to multiplicative shadowing. Based on this, the total power of the multipath components, including both the dominant and scattered components, is subject to non-negligible variations that follow the gamma distribution. For this model, we present an integral form of the probability density function (PDF) as well as important analytic expressions for the PDF, cumulative distribution function, moments, and moment generating function. In the case of indoor off-body communications, the corresponding measurements were carried out in the context of four explicit individual scenarios, namely line-of-sight (LOS) walking, non-LOS (NLOS) walking, rotational, and random movements. The measurements were repeated within three different indoor environments and considered three different hypothetical body-worn node locations. With the aid of these results, the parameters for the κ-μ / gamma composite fading model were estimated and analyzed extensively. Interestingly, for the majority of the indoor environments and movement scenarios, the parameter estimates suggested that dominant signal components existed even when the direct signal path was obscured by the test subject's body. Additionally, it is shown that the κ-μ / gamma composite fading model provides an adequate fit to the fading effects involved in off-body communications channels. Using the Kullback-Leibler divergence, we have also compared our results with another recently proposed shadowed fading model, namely the κ-μ / lognormal LOS shadowed fading model. It was found that the κ-μ / gamma composite fading model provided a better fit for the majority of the scenarios considered in this study.
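The Kullback-Leibler-based model comparison mentioned above can be sketched generically. The example below is not the κ-μ / gamma model itself (whose PDF involves special functions); it uses synthetic envelope samples and two stand-in scipy distributions purely to show the mechanics of scoring candidate fading models with a discretized KL divergence.

```python
import numpy as np
from scipy import stats

# Model-selection sketch with synthetic envelope samples (not the paper's
# measurements, and not the kappa-mu / gamma PDF itself): score candidate
# fading models by a discretized Kullback-Leibler divergence from the
# empirical histogram to each fitted model.
rng = np.random.default_rng(3)

# Stand-in "measured" envelope: Rician fading whose power is modulated by a
# gamma-distributed shadowing term (a crude proxy for the composite channel).
power = stats.gamma.rvs(a=5.0, scale=0.2, size=5000, random_state=rng)
envelope = stats.rice.rvs(b=1.5, scale=0.3 * np.sqrt(power), random_state=rng)

def kl_to_model(samples, frozen_dist, bins=60):
    """Discretized KL divergence from the sample histogram to a model PDF."""
    hist, edges = np.histogram(samples, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    q = frozen_dist.pdf(centers)
    mask = (hist > 0) & (q > 0)
    width = edges[1] - edges[0]
    return np.sum(hist[mask] * np.log(hist[mask] / q[mask])) * width

candidates = {
    "nakagami": stats.nakagami(*stats.nakagami.fit(envelope, floc=0)),
    "lognorm": stats.lognorm(*stats.lognorm.fit(envelope, floc=0)),
}
scores = {name: kl_to_model(envelope, d) for name, d in candidates.items()}
best = min(scores, key=scores.get)
```

The candidate with the smaller divergence is the better fit; in the paper this comparison is run between the κ-μ / gamma and κ-μ / lognormal composite models.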

Relevance: 30.00%

Abstract:

This article illustrates the applicability of resampling methods to multiple (simultaneous) tests in various econometric problems. Joint hypotheses are a routine consequence of economic theory, so controlling the rejection probability of combinations of tests is a problem frequently encountered in econometric and statistical settings. In this regard, it is well known that ignoring the joint character of multiple hypotheses can cause the level of the overall procedure to exceed the nominal level considerably. While most multiple-inference methods are conservative in the presence of non-independent statistics, the tests we propose aim to control the significance level exactly. To this end, we consider combined test criteria originally proposed for independent statistics. By applying the Monte Carlo test method, we show how these test-combination methods can be applied in such cases without resorting to asymptotic approximations. After reviewing earlier results on this topic, we show how such a methodology can be used to build normality tests based on several moments for the errors of linear regression models. For this problem, we propose a finite-sample valid generalization of the asymptotic test of Kiefer and Salmon (1983), as well as combined tests following the methods of Tippett and of Pearson-Fisher. We observe empirically that the test procedures corrected by the Monte Carlo test method do not suffer from the bias (under-rejection) problem often reported in this literature, notably against platykurtic distributions, and deliver appreciable power gains relative to the usual combined methods.
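The test-combination idea can be sketched as follows. This toy version (our own, not the article's exact procedure) combines skewness and kurtosis tests of residual normality with Tippett's minimum-p criterion and calibrates the combined statistic by Monte Carlo simulation under the null, rather than by an asymptotic approximation.

```python
import numpy as np
from scipy import stats

# Toy combined test (our own version of the idea, not the article's exact
# procedure): combine skewness and kurtosis normality tests on regression
# "residuals" with Tippett's minimum-p criterion, calibrated by Monte Carlo
# simulation under exact normality instead of asymptotics.
rng = np.random.default_rng(4)

def component_pvalues(resid):
    """Asymptotic p-values of the skewness and kurtosis normality tests."""
    return np.array([stats.skewtest(resid).pvalue,
                     stats.kurtosistest(resid).pvalue])

def tippett(pvals):
    return pvals.min()                  # reject for small values

n, n_mc = 100, 999
observed = rng.standard_t(df=5, size=n)     # heavy-tailed stand-in residuals
t_obs = tippett(component_pvalues(observed))

# Monte Carlo calibration of the combined statistic under the null.
t_null = np.array([tippett(component_pvalues(rng.standard_normal(n)))
                   for _ in range(n_mc)])
p_mc = (1 + np.sum(t_null <= t_obs)) / (n_mc + 1)
```

Because the null distribution of the minimum p-value is simulated directly, the dependence between the component statistics is accounted for exactly, which is the point of the Monte Carlo test method; Fisher's combination would replace `tippett` with `-2 * np.sum(np.log(pvals))` and reject for large values.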

Relevance: 30.00%

Abstract:

This paper studies the application of the simulated method of moments (SMM) for the estimation of nonlinear dynamic stochastic general equilibrium (DSGE) models. Monte Carlo analysis is employed to examine the small-sample properties of SMM in specifications with different curvature. Results show that SMM is computationally efficient and delivers accurate estimates, even when the simulated series are relatively short. However, asymptotic standard errors tend to overstate the actual variability of the estimates and, consequently, statistical inference is conservative. A simple strategy to incorporate priors in a method of moments context is proposed. An empirical application to the macroeconomic effects of rare events indicates that negatively skewed productivity shocks induce agents to accumulate additional capital and can endogenously generate asymmetric business cycles.
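The mechanics of SMM can be shown on a model far simpler than a DSGE. In this toy sketch (our illustration, not the paper's estimator), an AR(1) persistence and shock volatility are estimated by matching simulated variance and first-order autocovariance to their data counterparts, with the simulation shocks held fixed so the objective is smooth in the parameters.

```python
import numpy as np
from scipy import optimize

# Minimal SMM sketch on a toy AR(1) (not a DSGE model): estimate persistence
# and shock volatility by matching simulated variance and first-order
# autocovariance to their data counterparts, with an identity weighting matrix.
rng = np.random.default_rng(5)

def simulate_ar1(rho, sigma, shocks):
    x = np.zeros(len(shocks))
    for t in range(1, len(shocks)):
        x[t] = rho * x[t - 1] + sigma * shocks[t]
    return x

def moments(x):
    return np.array([np.var(x), np.cov(x[:-1], x[1:])[0, 1]])

n = 400
data = simulate_ar1(0.8, 1.0, rng.standard_normal(n))
m_data = moments(data)

# Hold the simulation shocks fixed so the objective is smooth in theta.
sim_shocks = rng.standard_normal(10 * n)

def smm_objective(theta):
    rho, sigma = theta
    if not (-0.99 < rho < 0.99 and sigma > 0):
        return 1e6                      # crude parameter-space penalty
    diff = moments(simulate_ar1(rho, sigma, sim_shocks)) - m_data
    return diff @ diff

result = optimize.minimize(smm_objective, x0=[0.5, 0.5], method="Nelder-Mead")
rho_hat, sigma_hat = result.x
```

The simulated series is ten times longer than the data, which reduces simulation noise in the matched moments; an optimal weighting matrix and simulated priors, as discussed in the paper, would slot into `smm_objective` in place of the identity weighting.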

Relevance: 30.00%

Abstract:

Partial moments are used extensively in the literature for the modeling and analysis of lifetime data. In this paper, we study properties of partial moments using quantile functions. The quantile-based measure determines the underlying distribution uniquely. We then characterize certain lifetime quantile function models. The proposed measure provides alternative definitions for ageing criteria. Finally, we explore the utility of the measure in comparing the characteristics of two lifetime distributions.
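The quantile-function formulation can be illustrated directly. Writing the r-th upper partial moment about a threshold t as an integral of the quantile function Q(u) over u in (F(t), 1), a midpoint-rule sketch (our illustration) reproduces the exponential closed form E[(X−t)+] = exp(−λt)/λ.

```python
import numpy as np

# Quantile-based sketch (our illustration): the r-th upper partial moment
# about t, E[(X - t)^r ; X > t], as an integral of the quantile function
# Q(u) for u in (F(t), 1), evaluated with a midpoint rule.
def upper_partial_moment(quantile, cdf_at_t, t, r=1, n_grid=200_000):
    du = (1.0 - cdf_at_t) / n_grid
    u = cdf_at_t + (np.arange(n_grid) + 0.5) * du
    return np.sum((quantile(u) - t) ** r) * du

# Check against the exponential(rate=lam) closed form E[(X-t)+] = exp(-lam*t)/lam.
lam, t = 2.0, 0.7
Q = lambda u: -np.log1p(-u) / lam      # exponential quantile function
F_t = 1.0 - np.exp(-lam * t)

numeric = upper_partial_moment(Q, F_t, t)
exact = np.exp(-lam * t) / lam
```

Because the integral runs over the unit interval, the same routine applies to models specified only through their quantile function, which is the setting the paper works in.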

Relevance: 30.00%

Abstract:

Recent analysis of the Arctic Oscillation (AO) in the stratosphere and troposphere has suggested that predictability of the state of the tropospheric AO may be obtained from the state of the stratospheric AO. However, much of this research has been of a purely qualitative nature. We present a more thorough statistical analysis of a long AO amplitude dataset which seeks to establish the magnitude of such a link. A relationship between the AO in the lower stratosphere and on the 1000 hPa surface on a 10-45 day time-scale is revealed. The relationship accounts for 5% of the variance of the 1000 hPa time series at its peak value and is significant at the 5% level. Over a similar time-scale the 1000 hPa time series accounts for only 1% of its own variance and is not significant at the 5% level. Further investigation reveals that the relationship is only present during the winter season, and in particular during February and March. It is also demonstrated that using stratospheric AO amplitude data as a predictor in a simple statistical model results in a gain of skill of 5% over a troposphere-only statistical model. This gain in skill is not repeated if an unrelated time series is included as a predictor in the model. Copyright © 2003 Royal Meteorological Society
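The skill-gain comparison can be mimicked on synthetic data. The sketch below uses entirely made-up AO-like series, not the paper's dataset: it fits a persistence-only regression, then one augmented with a lagged "stratospheric" predictor, and reads the gain in skill off the difference in explained variance.

```python
import numpy as np

# Sketch with made-up AO-like series (not the paper's dataset): compare a
# persistence-only regression with one that adds a lagged "stratospheric"
# predictor, reading the gain in skill off the explained variance.
rng = np.random.default_rng(6)

n, lag = 3000, 20
strat = np.convolve(rng.standard_normal(n), np.ones(30) / 30, mode="same")
trop = 1.5 * np.roll(strat, lag) + rng.standard_normal(n)  # downward "link"

def r2(y, X):
    """Explained variance of an ordinary least-squares fit of y on X."""
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return 1.0 - (y - A @ beta).var() / y.var()

y = trop[lag:]
trop_only = r2(y, trop[:-lag])                        # persistence predictor
with_strat = r2(y, np.column_stack([trop[:-lag], strat[:-lag]]))
gain = with_strat - trop_only
```

A positive `gain` here is the in-sample analogue of the paper's 5% skill improvement; a proper forecast-skill assessment would evaluate both models on held-out data.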

Relevance: 30.00%

Abstract:

Baking and 2-g mixograph analyses were performed for 55 cultivars (19 spring and 36 winter wheat) from various quality classes from the 2002 harvest in Poland. An instrumented 2-g direct-drive mixograph was used to study the mixing characteristics of the wheat cultivars. A number of parameters were extracted automatically from each mixograph trace and correlated with baking volume and flour quality parameters (protein content and high molecular weight glutenin subunit [HMW-GS] composition by SDS-PAGE) using multiple linear regression analysis. Principal component analysis of the mixograph data discriminated between four flour quality classes, and predictions of baking volume were obtained using several mixograph parameters chosen with a best-subsets regression routine, giving R² values of 0.862-0.866. In particular, three new spring wheat strains (CHD 502a-c) recently registered in Poland were highly discriminated and predicted to give high baking volume on the basis of two mixograph parameters: peak bandwidth and 10-min bandwidth.
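The PCA-plus-best-subsets pipeline can be sketched with random stand-in data (the parameter count and coefficients below are invented, not the study's mixograph traces). Principal components come from an SVD of the centered data matrix, and a small exhaustive search picks the best two-parameter predictor of baking volume.

```python
import numpy as np
from itertools import combinations

# Sketch with random stand-in data (invented parameter count and
# coefficients, not the study's mixograph traces): PCA via SVD, then a
# small best-subsets search over parameter pairs to predict baking volume.
rng = np.random.default_rng(7)

n_flours, n_params = 55, 8
X = rng.standard_normal((n_flours, n_params))       # "mixograph parameters"
volume = 2.0 * X[:, 0] - 1.0 * X[:, 3] + 0.3 * rng.standard_normal(n_flours)

# Principal components from the SVD of the centered data matrix.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)                     # variance ratio per PC

def r2_subset(cols):
    """In-sample R^2 of an OLS fit of volume on the chosen parameter columns."""
    A = np.column_stack([np.ones(n_flours), X[:, list(cols)]])
    beta, *_ = np.linalg.lstsq(A, volume, rcond=None)
    resid = volume - A @ beta
    return 1.0 - resid.var() / volume.var()

# Best subset of two parameters (the study likewise settled on two).
best_pair = max(combinations(range(n_params), 2), key=r2_subset)
best_r2 = r2_subset(best_pair)
```

With real data the exhaustive pair search would be replaced by the study's best-subsets routine over larger subset sizes, and `best_r2` would be the analogue of the reported 0.862-0.866.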