974 results for Data interpretation, statistical
Abstract:
National Highway Traffic Safety Administration, Washington, D.C.
Abstract:
National Highway Traffic Safety Administration, Washington, D.C.
Abstract:
National Highway Traffic Safety Administration, Washington, D.C.
Abstract:
At head of title: A Statistical abstract supplement.
Abstract:
At head of title, <1988>-1990: National data book and guide to sources.
Abstract:
Latest issue consulted: 126th ed. (2007).
Abstract:
Electricity market price forecasting is a challenging yet very important task for electricity market managers and participants. Due to the complexity and uncertainties in the power grid, electricity prices are highly volatile and often carry spikes, which may be tens or even hundreds of times higher than the normal price. Such price spikes are very difficult to predict. So far, most research on electricity price forecasting has been based on normal-range electricity prices. This paper proposes a data mining based electricity price forecast framework, which can predict the normal price as well as price spikes. The normal price can be predicted by a previously proposed wavelet and neural network based forecast model, while the spikes are forecast with a data mining approach. This paper focuses on spike prediction and explores the reasons for price spikes based on the measurement of a proposed composite supply-demand balance index (SDI) and relative demand index (RDI). These indices reflect the relationship among electricity demand, electricity supply and electricity reserve capacity. The proposed model is based on a mining database including market clearing price, trading hour, electricity demand, electricity supply and reserve. Bayesian classification and similarity searching techniques are used to mine the database and find the internal relationships between electricity price spikes and these proposed indices. The mining results are used to form the price spike forecast model. The proposed model is able to generate the forecast price spike, the level of the spike and an associated forecast confidence level. The model is tested with Queensland electricity market data with promising results. Crown Copyright (C) 2004 Published by Elsevier B.V. All rights reserved.
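A minimal sketch of the spike-classification step follows, assuming hypothetical definitions of the SDI and RDI and synthetic market records; the paper's exact index formulas and the Queensland data are not reproduced here.

```python
# A minimal sketch of the spike-classification step, assuming hypothetical
# definitions of the supply-demand balance index (SDI) and relative demand
# index (RDI); the paper's exact formulas and data are not reproduced here.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)

# Synthetic hourly market records: demand and reserve in MW.
demand = rng.uniform(5000, 9000, 1000)
reserve = rng.uniform(200, 2000, 1000)
supply = demand + reserve

# Assumed (illustrative) index definitions.
sdi = (supply - demand) / supply      # tightness of the supply-demand balance
rdi = demand / demand.max()           # demand relative to the period maximum

# Synthetic labels: spikes are taken to occur when the margin is very thin.
spike = (sdi < 0.05).astype(int)

# Bayesian classification of spike vs. normal hours from the two indices.
X = np.column_stack([sdi, rdi])
clf = GaussianNB().fit(X, spike)

# Predicted spike probability for a tight-supply, high-demand hour.
print(clf.predict_proba([[0.03, 0.95]]))
```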
Abstract:
We present a data-based statistical study of the effects of seasonal variations on the growth rates of gastro-intestinal (GI) parasitic infection in livestock. The growth rate is estimated through the variation in the number of eggs per gram (EPG) of faeces in animals. In accordance with earlier studies, our analysis also shows that rainfall is the dominant variable in determining EPG infection rates compared to other macro-parameters such as temperature and humidity. Our statistical analysis clearly indicates an oscillatory dependence of EPG levels on rainfall fluctuations. Monsoon recorded the highest infection, an increase of at least 2.5 times over the next most infected period (summer). A least-squares fit of the EPG versus rainfall data indicates an approach towards a super-diffusive (i.e., root-mean-square displacement growing faster than the square root of the elapsed time, as obtained for simple diffusion) infection growth pattern in low-rainfall regimes (technically defined as zeroth-level dependence) that is remarkably augmented in large-rainfall zones. Our analysis further indicates that for low fluctuations in temperature (true for the bulk of the data), the EPG level saturates beyond a critical value of rainfall, a threshold that is expected to indicate the onset of the nonlinear regime. The probability density functions (PDFs) of the EPG data show oscillatory behavior in the large-rainfall regime (greater than 500 mm), the frequency of oscillation once again being determined by the ambient wetness (rainfall and humidity). Data recorded over three pilot projects spanning three measures of rainfall and humidity bear testimony to the universality of this statistical argument. © 2013 Chattopadhyay and Bandyopadhyay.
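As an illustration of the least-squares fit described above, the following sketch fits a power law to synthetic EPG-versus-rainfall data; the exponent, noise model and 500 mm saturation threshold are assumptions, not the study's values.

```python
# Illustrative least-squares fit of EPG versus rainfall on log-log axes;
# all numbers below are synthetic stand-ins for the field records.
import numpy as np

rng = np.random.default_rng(1)
rainfall = rng.uniform(10, 800, 200)          # mm
epg = 50.0 * rainfall**0.7                    # assumed power-law growth
epg = np.minimum(epg, 50.0 * 500**0.7)        # saturation beyond 500 mm
epg *= rng.lognormal(0.0, 0.1, 200)           # multiplicative noise

# Fit log(EPG) = log(a) + b*log(rainfall) in the low-rainfall regime.
low = rainfall < 500
b, log_a = np.polyfit(np.log(rainfall[low]), np.log(epg[low]), 1)

# An exponent above 0.5 mirrors the super-diffusive analogy drawn above.
print(f"fitted exponent b = {b:.2f}")
```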
Abstract:
The purpose of this work is to argue that engineers can be motivated to study statistical concepts through applications from their own experience connected with statistical ideas. The main idea is to choose data from a manufacturing facility (for example, output from a CMM machine) and explain that even if the parts do not meet exact specifications, they are used in production. By graphing the data one can show that the error is random but follows a distribution; that is, there is regularity in the data in the statistical sense. As the error distribution is continuous, we advocate that the concept of randomness be introduced starting with continuous random variables, with probabilities connected to areas under the density. Discrete random variables are then introduced in terms of decisions connected with the size of the errors before generalizing to the abstract concept of probability. Using software, students can then be motivated to study statistical analysis of the data they encounter and the use of this analysis to make engineering and management decisions.
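A sketch of the proposed classroom exercise, under the assumption of normally distributed CMM deviations with an invented tolerance and spread:

```python
# A sketch of the classroom exercise: treat CMM deviations from nominal as
# draws from a continuous distribution and read probabilities off as areas
# under the fitted density. All numbers here are invented.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
errors = rng.normal(0.0, 0.02, 500)   # deviations from nominal, mm

mu, sigma = errors.mean(), errors.std(ddof=1)
tol = 0.05                            # hypothetical tolerance, mm

# P(|error| <= tol) is the area under the fitted normal density.
p_in_spec = norm.cdf(tol, mu, sigma) - norm.cdf(-tol, mu, sigma)
print(f"estimated in-spec fraction: {p_in_spec:.3f}")
```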
Abstract:
Technological advances combined with healthcare assistance bring increased risks to patient safety, making health institutions environments susceptible to lapses in the care provided. Highly complex sectors such as Intensive Care Units have these characteristics heightened, as they are spaces designed for the care of patients in serious medical condition, where the use of advanced technological devices becomes a necessity. Thus, the aim of this study was to assess nursing care from the perspective of patient safety in intensive care units. This is an evaluative study, which combines various forms of data collection and analysis in order to conduct an in-depth investigation. Data collection occurred on site, from April to July 2014, in hospitals equipped with adult intensive care unit services. A checklist instrument and semi-structured interviews with patients, families and professionals were used to evaluate the structure-process-outcome triad. The instrument for assessing nursing care with regard to patient safety included 97 questions related to structure and processes. The interviews provided data for the outcome analysis. Interviewees/participants were selected based on the willingness of potential participants. The following methods were used to analyze the data resulting from the instrument: statistical analysis of the inter-rater reliability measure known as kappa (K); observations from judges resulting from the observation process; and additional information obtained from the literature on the theme. Data from the interviews were analyzed with the IRAMUTEQ software, which used Descending Hierarchical Classification and Similarity analysis to aid data interpretation. The research followed the ethical principles of Resolution No. 466 of December 12, 2012, and the results were presented in three manuscripts: 1) Evaluation of patient safety in Intensive Care Units: a focus on structure; 2) Health evaluation processes: a nursing care perspective on patient safety; 3) Patient safety in intensive care units: perception of nurses, family members and patients. The first article, related to structure, draws on 24 items of the instrument and shows that most findings were not aligned with adequacy standards, indicating poor conditions in the structures offered by the health services. The second article analyzes the Processes pillar using 73 items of the instrument, showing that 50 items did not meet the standards required for safe care owing to the absence of adequate scientific guidance and effective communication in the nursing care process. For the third article, the results indicate that the intensive care units were perceived as safe places, yet changes are urged, especially in physical structure, availability of materials, and communication among professionals, patients and families. Therefore, our findings suggest that the nursing care provided in the evaluated intensive care units contains troubling shortcomings with regard to patient safety, evidencing an insecure setting for the assistance offered and a need for urgent measures to remedy the identified inadequacies with appropriate structures and to implement protocols and care guidelines in order to consolidate an environment more favorable to patient safety.
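For reference, a minimal illustration of the inter-rater agreement statistic (kappa) mentioned above, with invented ratings standing in for the study's checklist data:

```python
# Minimal illustration of the inter-rater reliability statistic kappa (K)
# cited above; the two raters' checklist scores are invented.
from sklearn.metrics import cohen_kappa_score

rater_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]   # 1 = item adequate, 0 = not
rater_b = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]

print(f"kappa = {cohen_kappa_score(rater_a, rater_b):.2f}")
```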
Abstract:
Three types of forecasts of the total Australian production of macadamia nuts (t nut-in-shell) have been produced early each year since 2001. The first is a long-term forecast based on the expected production from the tree census data held by the Australian Macadamia Society, suitably scaled up for missing data and assumed new plantings each year. These long-term forecasts range out to 10 years into the future and form a basis for industry and market planning. Secondly, a statistical adjustment (termed the climate-adjusted forecast) is made annually for the coming crop. As the name suggests, climatic influences are the dominant factors in this adjustment process; however, other terms such as bienniality of bearing, prices and orchard aging are also incorporated. Thirdly, industry personnel are surveyed early each year, with their estimates integrated into a growers' and pest-scouts' forecast. Initially conducted on a whole-country basis, these models are now constructed separately for the six main production regions of Australia, and then combined for national totals. Ensembles or suites of step-forward regression models using biologically relevant variables have been the main statistical method adopted; however, developing methodologies such as nearest-neighbour techniques, general additive models and random forests are continually being evaluated in parallel. The overall error rates average 14% for the climate forecasts and 12% for the growers' forecasts. These compare with 7.8% for USDA almond forecasts (based on extensive early-crop sampling) and 6.8% for coconut forecasts in Sri Lanka. However, our somewhat disappointing results were mainly due to a series of poor crops attributed to human factors, which have now been factored into the models. Notably, the 2012 and 2013 forecasts averaged 7.8% and 4.9% errors, respectively. Future models should also show continuing improvement as more data-years become available.
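A sketch of the parallel evaluation of forecasting models mentioned above (random forests and nearest-neighbour techniques), with invented climate predictors; the industry's actual variables, regions and data are not shown.

```python
# Sketch of evaluating two of the candidate forecasting methods in parallel;
# features and crop figures below are invented, not industry data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(3)
# Hypothetical features per season: rainfall, mean temperature, prior crop.
X = rng.normal(size=(30, 3))
y = 40 + 5 * X[:, 0] - 3 * X[:, 2] + rng.normal(0, 2, 30)   # crop, kt NIS

for model in (RandomForestRegressor(n_estimators=200, random_state=0),
              KNeighborsRegressor(n_neighbors=3)):
    model.fit(X[:-5], y[:-5])            # train on the earlier seasons
    pred = model.predict(X[-5:])         # forecast the last five seasons
    mape = np.mean(np.abs(pred - y[-5:]) / y[-5:]) * 100
    print(type(model).__name__, f"mean error = {mape:.1f}%")
```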
Abstract:
The multivariate normal distribution is commonly encountered in many fields, and missing values are a frequent issue in practice. The purpose of this research was to estimate the parameters of the three-dimensional covariance permutation-symmetric normal distribution with complete data and with all possible patterns of incomplete data. In this study, the MLEs with missing data were derived, and the properties of the MLEs as well as their sampling distributions were obtained. A Monte Carlo simulation study was used to evaluate the performance of the considered estimators both when ρ was known and when it was unknown. All results indicated that, compared to estimators obtained by omitting observations with missing data, the estimators derived in this article performed better. Furthermore, when ρ was unknown, using the estimate of ρ led to the same conclusion.
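For reference, one common reading of a three-dimensional permutation-symmetric (exchangeable, compound-symmetry) covariance structure (an assumption, since the paper's exact parameterization is not reproduced here) is

\[
\Sigma \;=\; \sigma^{2}
\begin{pmatrix}
1 & \rho & \rho\\
\rho & 1 & \rho\\
\rho & \rho & 1
\end{pmatrix},
\qquad -\tfrac{1}{2} < \rho < 1,
\]

where the constraint on \(\rho\) keeps \(\Sigma\) positive definite: its eigenvalues are \(\sigma^{2}(1+2\rho)\) and \(\sigma^{2}(1-\rho)\), the latter with multiplicity two.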
Abstract:
Long-term monitoring of acoustical environments is gaining popularity thanks to the wealth of scientific and engineering insights that it provides. The increasing interest is due to the constant growth of storage capacity and computational power to process large amounts of data. In this perspective, machine learning (ML) provides a broad family of data-driven statistical techniques for dealing with large databases. Nowadays, the conventional praxis of sound level meter measurements limits the global description of a sound scene to an energetic point of view: the equivalent continuous level Leq represents the main metric used to define an acoustic environment. Finer analyses involve the use of statistical levels. However, acoustic percentiles are based on temporal assumptions, which are not always reliable. A statistical approach based on the study of the occurrences of sound pressure levels brings a different perspective to the analysis of long-term monitoring. Depicting a sound scene through the most probable sound pressure level, rather than portions of energy, provides more specific information about the activity carried out during the measurements; the statistical mode of the occurrences can capture typical behaviors of specific kinds of sound sources. The present work proposes an ML-based method to identify, separate and measure coexisting sound sources in real-world scenarios. It is based on long-term monitoring and is addressed to acousticians focused on the analysis of environmental noise in manifold contexts. The method is based on clustering analysis: two algorithms, the Gaussian Mixture Model and K-means clustering, form the core of a process to investigate different active spaces monitored through sound level meters. The procedure has been applied in two different contexts, university lecture halls and offices. The proposed method shows robust and reliable results in describing the acoustic scenario and could represent an important analytical tool for acousticians.
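A minimal sketch of the clustering core described above, fitting a Gaussian Mixture Model and K-means to a synthetic stream of sound pressure levels; the two-source mixture and its levels are invented.

```python
# Minimal sketch of the clustering core: fit a Gaussian Mixture Model and
# K-means to synthetic sound pressure levels mixing two hypothetical
# sources (steady background noise and speech activity).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
levels = np.concatenate([rng.normal(42, 2, 3000),    # background, dB
                         rng.normal(58, 4, 1000)])   # speech, dB
X = levels.reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# The component means approximate the most probable level of each source.
print("GMM means:", np.sort(gmm.means_.ravel()))
print("K-means centers:", np.sort(km.cluster_centers_.ravel()))
```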
Abstract:
The purpose of this study was to evaluate the dentin shear bond strength of four adhesive systems (Adper Single Bond 2, Adper Prompt L-Pop, Magic Bond DE and Self Etch Bond) with regard to buccal and lingual surfaces and dentin depth. Forty extracted third molars had their roots removed and crowns bisected in the mesiodistal direction. The buccal and lingual surfaces were fixed in a PVC/acrylic resin ring and divided into buccal and lingual groups assigned to each selected adhesive. The same specimens prepared for the evaluation of superficial dentin shear resistance were used to evaluate different dentin depths. The specimens were identified and abraded at depths of 0.5, 1.0, 1.5 and 2.0 mm. Each depth was evaluated according to ISO TR 11405 using an EMIC-2000 machine set at 0.5 mm/min with a 200 kgf load cell. We performed statistical analyses on the results (ANOVA, Tukey and Scheffé tests). The data revealed statistical differences (p < 0.01) for the adhesive and depth variables as well as the adhesive/depth interaction. Adper Single Bond 2 demonstrated the highest mean shear bond strength. Prompt L-Pop, a self-etching adhesive, showed higher mean values than the Magic Bond DE and Self Etch Bond adhesives, a total-etch and a self-etching adhesive, respectively. It may be concluded that dentin shear bond strength depends on the material (adhesive system), substrate depth and the adhesive/depth interaction.
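A sketch of the ANOVA-plus-Tukey analysis named above, using invented shear-strength values in place of the study's measurements:

```python
# Sketch of the adhesive comparison (ANOVA followed by a Tukey post-hoc
# test), with invented shear bond strengths in MPa for four adhesives;
# stats.tukey_hsd requires a recent SciPy release.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
groups = [rng.normal(mu, 2.0, 10) for mu in (18, 15, 13, 12)]

f, p = stats.f_oneway(*groups)
print(f"one-way ANOVA: F = {f:.2f}, p = {p:.4f}")

print(stats.tukey_hsd(*groups))   # pairwise post-hoc comparisons
```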
Abstract:
The question of remanent magnetization in the interpretation of magnetic anomalies is frequently neglected, mainly because of the difficulty of dealing with it. In most cases, both in academic work and in the models circulating among mineral and petroleum exploration professionals, remanent magnetization is assumed to be negligible and only the induced magnetization is used. This article shows that this parameter is particularly important for Brazilian magnetic anomalies and seeks to provide support for the use of this information. The use of two well-established techniques, Reduction to the Pole and the Analytic Signal, is discussed for Brazilian anomalies with and without remanent magnetization. We show the application of a technique for determining the total magnetization, allowing models to be built from the resultant of the sum of the induced and remanent magnetizations, and we then present a methodology for using the remanent information in dating the source rocks.
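For reference, the standard amplitude of the 3-D analytic signal of a total-field anomaly \(T\) (the textbook definition, not a formula reproduced from this paper) is

\[
|A(x,y)| \;=\; \sqrt{\left(\frac{\partial T}{\partial x}\right)^{2} + \left(\frac{\partial T}{\partial y}\right)^{2} + \left(\frac{\partial T}{\partial z}\right)^{2}},
\]

whose maxima over source edges are, for 2-D sources, largely independent of the magnetization direction, which is the property that makes it attractive when remanence is present.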