936 results for VITAL STATISTICS
Abstract:
With the rapid growth of information and communication technology (ICT) in Korea, there was a need to improve the quality of official ICT statistics. In order to do this, various factors had to be considered, such as the quality of surveying, processing, and output as well as the reputation of the statistical agency. We used PLS estimation to determine how these factors might influence customer satisfaction. Furthermore, through a comparison of associated satisfaction indices, we provided feedback to the responsible statistics agency. It appears that our model can be used as a tool for improving the quality of official ICT statistics. © 2008 Elsevier B.V. All rights reserved.
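As a hedged illustration of the kind of estimation involved: the sketch below fits a plain PLS regression (via scikit-learn) of a satisfaction score on hypothetical ratings of the quality factors named above. The study itself uses a PLS path (structural) model for satisfaction indices, so this is only a simplified stand-in, and all data, weights, and factor names here are invented for demonstration.

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(0)
    n = 300
    # Hypothetical respondent ratings (1-10) of the quality factors named above:
    # surveying quality, processing quality, output quality, agency reputation.
    X = rng.uniform(1, 10, size=(n, 4))
    # Assumed relation: satisfaction driven mainly by output quality and reputation.
    y = 0.2 * X[:, 0] + 0.2 * X[:, 1] + 0.8 * X[:, 2] + 0.6 * X[:, 3] + rng.normal(0, 1, n)

    pls = PLSRegression(n_components=2)
    pls.fit(X, y)
    print("estimated factor weights:", pls.coef_.ravel())
    print("R^2 of the satisfaction model:", round(pls.score(X, y), 3))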
Abstract:
The present study investigated the relationship between statistics anxiety, individual characteristics (e.g., trait anxiety and learning strategies), and academic performance. Students enrolled in a statistics course in psychology (N=147) filled in a questionnaire on statistics anxiety, trait anxiety, interest in statistics, mathematical self-concept, learning strategies, and procrastination. Additionally, their performance in the examination was recorded. The structural equation model showed that statistics anxiety played a crucial role as the strongest direct predictor of performance. Students with higher statistics anxiety achieved less in the examination and showed higher procrastination scores. Statistics anxiety was related indirectly to spending less effort and time on learning. Trait anxiety was related positively to statistics anxiety and, counterintuitively, to academic performance. This result can be explained by the heterogeneity of the measure of trait anxiety: the part of trait anxiety that is unrelated to the specific part of statistics anxiety correlated positively with performance.
Abstract:
The study of random dynamic systems usually requires the definition of an ensemble of structures and the solution of the eigenproblem for each member of the ensemble. If the process is carried out using a conventional numerical approach, the computational cost becomes prohibitive for complex systems. In this work, an alternative numerical method is proposed. The results for the response statistics are compared with values obtained from a detailed stochastic FE analysis of plates. The proposed method seems to capture the statistical behaviour of the response with a reduced computational cost.
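For context, the conventional approach the abstract refers to amounts to solving the eigenproblem for every member of a random ensemble and collecting response statistics. The toy Monte Carlo sketch below (a 3-DOF spring-mass chain with randomly perturbed stiffnesses, all values invented) illustrates that baseline, whose cost is what the proposed method aims to reduce; it is not the paper's method.

    import numpy as np

    rng = np.random.default_rng(0)

    def natural_frequencies(stiffness, mass=1.0):
        """Eigenfrequencies of a toy 3-DOF spring-mass chain with given spring stiffnesses."""
        k1, k2, k3 = stiffness
        K = np.array([[k1 + k2, -k2, 0.0],
                      [-k2, k2 + k3, -k3],
                      [0.0, -k3, k3]])
        eigvals = np.linalg.eigvalsh(K / mass)   # M = mass * I, so M^-1 K = K / mass
        return np.sqrt(np.abs(eigvals))

    # Ensemble of structures: nominal stiffness 1000 with 10% random scatter
    freqs = np.array([natural_frequencies(1000.0 * (1.0 + 0.1 * rng.standard_normal(3)))
                      for _ in range(2000)])

    print("mean natural frequencies:", freqs.mean(axis=0))
    print("std of natural frequencies:", freqs.std(axis=0))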
Abstract:
Statistically planar turbulent partially premixed flames for different initial intensities of decaying turbulence have been simulated for global equivalence ratios of 0.7 and 1.0 using three-dimensional, simplified-chemistry direct numerical simulations (DNS). The simulation parameters are chosen such that the flames represent combustion in the thin reaction zones regime. A random bimodal distribution of equivalence ratio is introduced in the unburned gas ahead of the flame to account for the mixture inhomogeneity. The results suggest that the probability density function (PDF) of the mixture fraction gradient magnitude |∇ξ| (i.e., P(|∇ξ|)) can be reasonably approximated using a log-normal distribution. However, this presumed PDF captures only the qualitative nature of the PDF of the reaction progress variable gradient magnitude |∇c| (i.e., P(|∇c|)). It has been found that a bivariate log-normal distribution does not sufficiently capture the quantitative behavior of the joint PDF of |∇ξ| and |∇c| (i.e., P(|∇ξ|, |∇c|)), and the agreement with the DNS data has been found to be poor in certain regions of the flame brush, particularly toward the burned gas side. Moreover, the variables |∇ξ| and |∇c| show appreciable correlation toward the burned gas side of the flame brush. These findings are corroborated further using DNS data of a lifted jet flame to study the flame geometry dependence of these statistics. © 2013 Copyright Taylor and Francis Group, LLC.
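A minimal sketch of constructing and checking a presumed log-normal PDF of a gradient magnitude, assuming synthetic samples in place of actual DNS values of |∇ξ|; it only illustrates the fitting step discussed above, not the DNS analysis itself.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Hypothetical samples standing in for |∇ξ| extracted from a DNS snapshot.
    grad_xi = rng.lognormal(mean=-1.0, sigma=0.8, size=50_000)

    # Build the presumed log-normal PDF from the first two moments of ln|∇ξ|.
    mu, sigma = np.log(grad_xi).mean(), np.log(grad_xi).std()
    presumed_pdf = stats.lognorm(s=sigma, scale=np.exp(mu))

    # Compare the presumed PDF with the sample histogram at a few points.
    x = np.array([0.1, 0.3, 0.6, 1.0, 1.5])
    hist, edges = np.histogram(grad_xi, bins=200, range=(0.0, 5.0), density=True)
    empirical = hist[np.searchsorted(edges, x) - 1]
    print(np.column_stack([x, presumed_pdf.pdf(x), empirical]))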
Abstract:
BACKGROUND: A large proportion of students identify statistics courses as the most anxiety-inducing courses in their curriculum. Many students feel impaired by feelings of state anxiety in the examination and therefore probably show lower achievements. AIMS: The study investigates how statistics anxiety, attitudes (e.g., interest, mathematical self-concept) and trait anxiety, as a general disposition to anxiety, influence experiences of anxiety as well as achievement in an examination. SAMPLE: Participants were 284 undergraduate psychology students, 225 females and 59 males. METHODS: Two weeks prior to the examination, participants completed a demographic questionnaire and measures of the STARS, the STAI, self-concept in mathematics, and interest in statistics. At the beginning of the statistics examination, students assessed their present state anxiety by the KUSTA scale. After 25 min, all examination participants gave another assessment of their anxiety at that moment. Students' examination scores were recorded. Structural equation modelling techniques were used to test relationships between the variables in a multivariate context. RESULTS: Statistics anxiety was the only variable related to state anxiety in the examination. Via state anxiety experienced before and during the examination, statistics anxiety had a negative influence on achievement. However, statistics anxiety also had a direct positive influence on achievement. This result may be explained by students' motivational goals in the specific educational setting. CONCLUSIONS: The results provide insight into the relationship between students' attitudes, dispositions, experiences of anxiety in the examination, and academic achievement, and give recommendations to instructors on how to support students prior to and in the examination.
Abstract:
Performance on visual working memory tasks decreases as more items need to be remembered. Over the past decade, a debate has unfolded between proponents of slot models and slotless models of this phenomenon (Ma, Husain, & Bays, Nature Neuroscience, 17, 347-356, 2014). Zhang and Luck (Nature, 453(7192), 233-235, 2008) and Anderson, Vogel, and Awh (Attention, Perception, & Psychophysics, 74(5), 891-910, 2011) noticed that as more items need to be remembered, "memory noise" seems to first increase and then reach a "stable plateau." They argued that three summary statistics characterizing this plateau are consistent with slot models, but not with slotless models. Here, we assess the validity of their methods. We generated synthetic data both from a leading slot model and from a recent slotless model and quantified model evidence using log Bayes factors. We found that the summary statistics provided at most 0.15% of the expected model evidence in the raw data. In a model recovery analysis, a total of more than a million trials were required to achieve 99% correct recovery when models were compared on the basis of summary statistics, whereas fewer than 1,000 trials were sufficient when raw data were used. Therefore, at realistic numbers of trials, plateau-related summary statistics are highly unreliable for model comparison. Applying the same analyses to subject data from Anderson et al. (2011), we found that the evidence in the summary statistics was at most 0.12% of the evidence in the raw data and far too weak to warrant any conclusions. The evidence in the raw data, in fact, strongly favored the slotless model. These findings call into question claims about working memory that are based on summary statistics.
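A hedged sketch of what quantifying model evidence with a log Bayes factor on raw data looks like in general. The models here (Poisson vs. geometric counts with flat priors over a parameter grid) and the synthetic data are invented for illustration; they are not the slot/slotless working-memory models compared in the paper.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    data = rng.poisson(4.0, size=200)   # synthetic "raw data" (counts)

    def log_marginal(logpmf_per_param):
        # log marginal likelihood under a flat prior over the parameter grid
        return np.logaddexp.reduce(logpmf_per_param) - np.log(len(logpmf_per_param))

    lam_grid = np.linspace(0.5, 10.0, 200)
    p_grid = np.linspace(0.05, 0.95, 200)

    log_m_poisson = log_marginal(np.array(
        [stats.poisson.logpmf(data, lam).sum() for lam in lam_grid]))
    log_m_geom = log_marginal(np.array(
        [stats.geom.logpmf(data + 1, p).sum() for p in p_grid]))

    log_bayes_factor = log_m_poisson - log_m_geom
    print(f"log Bayes factor (Poisson vs. geometric): {log_bayes_factor:.1f}")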
Abstract:
A mesoscopic Coulomb blockade system with two transport channels is studied in terms of full counting statistics. It is found that the shot noise and skewness are crucially affected by quantum mechanical interference. In particular, super-Poissonian behavior can be induced as a consequence of constructive interference, and can be understood in terms of the formation of effective fast and slow transport channels. Analyses of dephasing and finite-temperature effects are carried out, together with physical interpretations.
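A toy numerical illustration of how a mixture of effective fast and slow channels yields super-Poissonian counting statistics (Fano factor above 1). The channel rates and the mixing are invented for demonstration and do not model the interference physics of the paper.

    import numpy as np

    rng = np.random.default_rng(1)
    windows = 5000
    # Hypothetical electron counts per counting window: a random mixture of an
    # effective slow channel and an effective fast channel produces bunching.
    slow = rng.poisson(2.0, size=windows)
    fast = rng.poisson(20.0, size=windows)
    counts = np.where(rng.random(windows) < 0.5, slow, fast)

    mean = counts.mean()
    fano = counts.var() / mean                       # F > 1: super-Poissonian
    skewness = ((counts - mean) ** 3).mean() / mean  # normalized third moment
    print(f"Fano factor: {fano:.2f}, normalized skewness: {skewness:.2f}")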
Abstract:
The grid is a foundation of reservoir description and reservoir simulation. The grid size strongly influences the precision of reservoir simulation, so the gridding of reservoir parameters requires an interpolation method that is both fast and accurate. The improved distance-weighted interpolation method has several useful properties, including logical selection of data points, exact interpolation, low computational cost, and simple programming, and its application improves the precision of reservoir description and reservoir simulation. Fractal geological statistics describes the spatial distribution of geological properties within a reservoir. Applying fractal interpolation to the gridding of reservoir parameters yields results that better match the geological properties and configuration of the reservoir and improves the rationality and quality of the interpolation. By combining the improved distance-weighted interpolation method with fractal interpolation in mathematical models of grid upscaling and grid downscaling, the software packages GROUGH (grid upscaling) and GFINE (grid downscaling) were developed to address the upscaling and downscaling problems in reservoir description and reservoir simulation. GROUGH and GFINE were first applied in research on fine-scale and large-scale reservoir simulation. Using the grid upscaling and downscaling technique in fine-scale reservoir simulation of the Es21-2 unit of the Shengtuo oilfield, a detailed distribution of remaining oil was obtained, providing a sound scientific basis for integrated and comprehensive adjustment. In the alkaline/surfactant/polymer flooding pilot area in the west district of the Gudao oilfield, a giant tertiary oil recovery pilot, fine-scale reservoir simulation of chemical flooding was realized for the first time using the grid upscaling and downscaling technique. The grid upscaling and downscaling technique has broad application prospects and significant research value for reservoir description and reservoir simulation.
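A minimal sketch of the kind of interpolation involved, assuming "distance-weighted" refers to inverse-distance weighting (IDW) with nearest-neighbor selection; the well locations, porosity values, and grid below are all hypothetical and are not taken from GROUGH or GFINE.

    import numpy as np

    def idw_interpolate(xy_known, values, xy_query, power=2.0, n_neighbors=8):
        """Inverse-distance-weighted interpolation of scattered reservoir
        parameters onto query (grid) points, using the nearest n_neighbors."""
        out = np.empty(len(xy_query))
        for i, q in enumerate(xy_query):
            d = np.linalg.norm(xy_known - q, axis=1)
            idx = np.argsort(d)[:n_neighbors]     # logical selection of data points
            if d[idx[0]] < 1e-12:                 # exact interpolation at data points
                out[i] = values[idx[0]]
                continue
            w = 1.0 / d[idx] ** power
            out[i] = np.dot(w, values[idx]) / w.sum()
        return out

    # Hypothetical example: porosity at scattered wells interpolated onto a 20 x 20 grid
    rng = np.random.default_rng(0)
    wells = rng.uniform(0, 1000, size=(50, 2))
    porosity = 0.15 + 0.05 * np.sin(wells[:, 0] / 200.0) + 0.01 * rng.standard_normal(50)
    gx, gy = np.meshgrid(np.linspace(0, 1000, 20), np.linspace(0, 1000, 20))
    grid = np.column_stack([gx.ravel(), gy.ravel()])
    print(idw_interpolate(wells, porosity, grid).reshape(20, 20)[:3, :3])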
Abstract:
Humans recognize optical reflectance properties of surfaces such as metal, plastic, or paper from a single image without knowledge of illumination. We develop a machine vision system to perform similar recognition tasks automatically. Reflectance estimation under unknown, arbitrary illumination proves highly underconstrained due to the variety of potential illumination distributions and surface reflectance properties. We have found that the spatial structure of real-world illumination possesses some of the statistical regularities observed in the natural image statistics literature. A human or computer vision system may be able to exploit this prior information to determine the most likely surface reflectance given an observed image. We develop an algorithm for reflectance classification under unknown real-world illumination, which learns relationships between surface reflectance and certain features (statistics) computed from a single observed image. We also develop an automatic feature selection method.
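A hedged sketch of the general idea of classifying surfaces from image statistics: compute a few summary statistics of intensities and gradients as features and train a classifier. The statistics, the two synthetic "surface classes", and the SVM used here are simplified placeholders, not the paper's actual features, illumination data, or algorithm.

    import numpy as np
    from sklearn.svm import SVC

    def image_statistics(img):
        """Summary statistics of the kind usable as reflectance features:
        moments of pixel intensities and of gradient magnitudes."""
        gx, gy = np.gradient(img.astype(float))
        grad = np.hypot(gx, gy)
        feats = []
        for channel in (img.astype(float), grad):
            x = channel.ravel()
            m, s = x.mean(), x.std() + 1e-12
            z = (x - m) / s
            feats += [m, s, (z ** 3).mean(), (z ** 4).mean()]  # mean, std, skewness, kurtosis
        return np.array(feats)

    # Hypothetical training set: two synthetic "surface classes" with different statistics
    rng = np.random.default_rng(0)
    shiny = [rng.exponential(0.2, (32, 32)) for _ in range(40)]  # heavy-tailed highlights
    matte = [np.clip(rng.normal(0.5, 0.1, (32, 32)), 0, 1) for _ in range(40)]
    X = np.array([image_statistics(im) for im in shiny + matte])
    y = np.array([1] * 40 + [0] * 40)
    clf = SVC().fit(X, y)
    print("training accuracy:", clf.score(X, y))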
Abstract:
Janet Taylor, Ross D King, Thomas Altmann and Oliver Fiehn (2002). Application of metabolomics to plant genotype discrimination using statistics and machine learning. 1st European Conference on Computational Biology (ECCB). (published as a journal supplement in Bioinformatics 18: S241-S248).
Abstract:
Wireless sensor networks have recently emerged as enablers of important applications such as environmental, chemical, and nuclear sensing systems. Such applications have sophisticated spatial-temporal semantics that set them apart from traditional wireless networks. For example, the computation of temperature averaged over the sensor field must take into account local densities. This is crucial since otherwise the estimated average temperature can be biased by over-sampling areas where many more sensors exist. Thus, we envision that a fundamental service a wireless sensor network should provide is that of estimating local densities. In this paper, we propose a lightweight probabilistic density inference protocol, which we call DIP, that allows each sensor node to implicitly estimate its neighborhood size without the explicit exchange of node identifiers used in existing density discovery schemes. The theoretical basis of DIP is a probabilistic analysis that gives the relationship between the number of sensor nodes contending in the neighborhood of a node and the level of contention measured by that node. Extensive simulations confirm the premise of DIP: it can provide statistically reliable and accurate estimates of local density at a very low energy cost and in constant running time. We demonstrate how applications could be built on top of our DIP-based service by computing density-unbiased statistics from estimated local densities.
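A simplified sketch of the general idea of inferring neighborhood size from measured contention, assuming a toy slotted model in which each neighbor transmits independently with a known per-slot probability; this is only an illustration of inverting a contention model, not the actual DIP protocol or its analysis.

    import numpy as np

    def estimate_neighborhood_size(idle_fraction, tx_prob):
        """Infer the number of contending neighbors n from the observed fraction of
        idle slots, assuming each neighbor transmits independently with probability
        tx_prob per slot, so that P(idle slot) = (1 - tx_prob) ** n."""
        return np.log(idle_fraction) / np.log(1.0 - tx_prob)

    # Simulated check: 25 neighbors, transmit probability 0.05 per slot
    rng = np.random.default_rng(0)
    n_true, tx_prob, slots = 25, 0.05, 4000
    transmissions = rng.random((slots, n_true)) < tx_prob
    idle_fraction = np.mean(transmissions.sum(axis=1) == 0)
    print("true n:", n_true,
          "estimated n:", round(estimate_neighborhood_size(idle_fraction, tx_prob), 1))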
Abstract:
Under natural viewing conditions, small movements of the eye, head, and body prevent the maintenance of a steady direction of gaze. It is known that stimuli tend to fade when they are stabilized on the retina for several seconds. However, it is unclear whether the physiological motion of the retinal image serves a visual purpose during the brief periods of natural visual fixation. This study examines the impact of fixational instability on the statistics of the visual input to the retina and on the structure of neural activity in the early visual system. We show that fixational instability introduces a component in the retinal input signals that, in the presence of natural images, lacks spatial correlations. This component strongly influences neural activity in a model of the LGN. It decorrelates cell responses even if the contrast sensitivity functions of simulated cells are not perfectly tuned to counterbalance the power-law spectrum of natural images. A decorrelation of neural activity at the early stages of the visual system has been proposed to be beneficial for discarding statistical redundancies in the input signals. The results of this study suggest that fixational instability might contribute to establishing efficient representations of natural stimuli.
Abstract:
In this paper, we examine exchange rates in Vietnam’s transitional economy. Evidence of long-run equilibrium is established in most cases through a single co-integrating vector among the endogenous variables that determine the real exchange rates. This supports relative PPP, in which the error-correction term (ECT) of the system can be combined linearly into a stationary process, reducing deviation from PPP in the long run. Restricted coefficient vectors β' = (1, 1, -1) for the real exchange rates of the currencies in question are not rejected. This evidence on relative PPP adds to findings by many researchers, including Flre et al. (1999), Lee (1999), Johnson (1990), Culver and Papell (1999), and Cuddington and Liang (2001). Instead of testing different time series against a common base currency, we use different base currencies (USD, GBP, JPY and EUR). By doing so, we ask whether the theory posits significant differences against any one currency. We have found consensus, given inevitable technical differences, even with a smaller data sample for EUR. Speeds of convergence to PPP and of adjustment are faster than those reported in other studies for developed economies, using both observed and bootstrapped half-life (HL) measures. Perhaps a better explanation is the adjustment from the hyperinflation period, after which the theory indicates that the adjustment process actually accelerates. We observe that deviation appears to have been large in the early stages of the reform, mostly overvaluation. Over time, its correction took place, and significant deviations gradually disappeared.
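A minimal sketch of the standard half-life (HL) calculation for PPP deviations from an AR(1) fit, HL = ln(0.5) / ln(rho). The real-exchange-rate deviation series below is synthetic, and this is not the paper's observed or bootstrapped HL procedure, only the textbook version of the measure.

    import numpy as np

    def half_life(deviation):
        """Half-life of PPP deviations from an AR(1) fit q_t = rho * q_{t-1} + e_t,
        using HL = ln(0.5) / ln(rho)."""
        q = np.asarray(deviation) - np.mean(deviation)
        rho = np.dot(q[1:], q[:-1]) / np.dot(q[:-1], q[:-1])
        return np.log(0.5) / np.log(rho)

    # Synthetic real-exchange-rate deviation with true rho = 0.9 (e.g., monthly data)
    rng = np.random.default_rng(0)
    q = np.zeros(300)
    for t in range(1, 300):
        q[t] = 0.9 * q[t - 1] + rng.normal(0.0, 0.02)
    print(f"estimated half-life: {half_life(q):.1f} periods")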