974 results for Data Interpretation, Statistical


Abstract:

Aims: To determine the degree of inter-institutional agreement in the assessment of dobutamine stress echocardiograms using modern stress echocardiographic technology in combination with standardized data acquisition and assessment criteria. Methods and Results: Among six experienced institutions, 150 dobutamine stress echocardiograms (dobutamine up to 40 μg kg⁻¹ min⁻¹ and atropine up to 1 mg) were performed in patients with suspected coronary artery disease using fundamental and harmonic imaging, following a consistent digital acquisition protocol. Each dobutamine stress echocardiogram was assessed at every institution for endocardial visibility and left ventricular wall motion, without knowledge of any other data, using standardized reading criteria. No patients were excluded due to poor image quality or inadequate stress level. Coronary angiography was performed within 4 weeks and demonstrated significant coronary artery disease (≥50% diameter stenosis) in 87 patients. Using harmonic imaging, an average of 5.2 ± 0.9 institutions agreed on dobutamine stress echocardiogram results as being normal or abnormal (mean kappa 0.55; 95% CI 0.50-0.60). Agreement was higher in patients with no coronary artery disease (equal assessment of results by 5.5 ± 0.8 institutions) or three-vessel disease (5.4 ± 0.8 institutions), and lower in one- or two-vessel disease (5.0 ± 0.9 and 5.2 ± 1.0 institutions, respectively; P=0.041). Disagreement on test results was greatest when only minor wall motion abnormalities were present. Agreement on dobutamine stress echocardiogram results was lower using fundamental imaging (mean kappa 0.49; 95% CI 0.44-0.54; P
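
The agreement figures above are mean pairwise kappa values across the six readers. A minimal sketch of that kind of computation, assuming synthetic normal/abnormal ratings rather than the study's data (the study may have used a multi-rater kappa variant):

```python
# Sketch: mean pairwise Cohen's kappa across six readers, as one way to
# summarize inter-institutional agreement. Ratings are synthetic
# (1 = abnormal, 0 = normal), not the study's data.
from itertools import combinations
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
ratings = rng.integers(0, 2, size=(6, 150))  # 6 institutions x 150 studies

kappas = [cohen_kappa_score(ratings[i], ratings[j])
          for i, j in combinations(range(6), 2)]
print(f"mean pairwise kappa: {np.mean(kappas):.2f}")
```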

Abstract:

This article presents research aimed at building a broad and balanced model for evaluating data quality on the institutional websites of health units. We carried out a literature review of the available approaches to evaluating website content quality, in order to identify the most recurrent dimensions and attributes, and we then ran a Delphi process with experts to arrive at an adequate set of attributes, with respective weights, for measuring content quality. The results revealed a high level of consensus among the experts who participated in the Delphi process, and the statistical analyses and techniques applied are robust, lending confidence to the results and to the resulting model.
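
Once a Delphi process has produced attribute weights, the content-quality measurement reduces to a weighted sum of attribute scores. A minimal sketch with invented attributes, weights, and scores (the model's actual attribute set is not reproduced here):

```python
# Sketch: scoring a website against Delphi-weighted quality attributes.
# Attribute names, weights, and scores are illustrative, not the paper's model.
weights = {"accuracy": 0.30, "currency": 0.25,
           "completeness": 0.25, "readability": 0.20}
scores = {"accuracy": 4, "currency": 3,
          "completeness": 5, "readability": 4}  # expert ratings, 1-5 scale

quality = sum(weights[a] * scores[a] for a in weights)
print(f"weighted content-quality score: {quality:.2f} / 5")
```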

Abstract:

Low-noise surfaces have been increasingly considered a viable and cost-effective alternative to acoustical barriers. However, road planners and administrators frequently lack information on the correlation between the type of road surface and the resulting noise emission profile. To address this problem, a method to identify and classify different types of road pavements was developed, whereby near-field road noise is analyzed using statistical learning methods. The vehicle rolling sound signal near the tires and close to the road surface was acquired by two microphones in a special arrangement implementing the Close-Proximity method. A set of features characterizing the properties of the road pavement was extracted from the corresponding sound profiles. A feature selection method was used to automatically select those most relevant for predicting the type of pavement, while reducing the computational cost. Road pavement segments of several different types were tested and the performance of the classifier was evaluated. Results of pavement classification performed during a road journey are presented on a map, together with geographical data. This procedure leads to a considerable improvement in the quality of road pavement noise data, thereby increasing the accuracy of road traffic noise prediction models.
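
A minimal sketch of the select-then-classify pipeline described above, using synthetic acoustic features and off-the-shelf components; the paper's actual features, selector, and learner are not specified here:

```python
# Sketch: feature selection followed by pavement-type classification.
# Data are synthetic placeholders for features extracted from CPX recordings.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 40))    # 40 acoustic features per sound segment
y = rng.integers(0, 4, size=300)  # 4 hypothetical pavement classes

clf = make_pipeline(SelectKBest(f_classif, k=10), SVC())
print(f"cv accuracy: {cross_val_score(clf, X, y, cv=5).mean():.2f}")
```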

Abstract:

This paper presents an investigation into cloud-to-ground lightning activity over the continental territory of Portugal, with data collected by the national Lightning Location System. The Lightning Location System in Portugal is first presented. Analyses of the geographical, seasonal, and polarity distributions of cloud-to-ground lightning activity, and of the cumulative probability of peak current, are then carried out. An overall ground flash density map is constructed from the database, which covers more than five years and almost four million records. This map is compared with the thunderstorm-days map produced by the Portuguese Institute of Meteorology and with the orographic map of Portugal. Finally, conclusions are drawn.
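
A ground flash density map of this kind is essentially a per-cell count normalised by cell area and observation period. A minimal sketch on synthetic stroke coordinates:

```python
# Sketch: ground flash density (flashes per km^2 per year) from located
# flashes. Coordinates, grid, and counts are synthetic placeholders.
import numpy as np

years = 5.0
rng = np.random.default_rng(2)
x = rng.uniform(0, 300, 400_000)  # easting of each flash, km (synthetic)
y = rng.uniform(0, 500, 400_000)  # northing of each flash, km (synthetic)

counts, _, _ = np.histogram2d(x, y, bins=[30, 50],
                              range=[[0, 300], [0, 500]])
cell_area = 10.0 * 10.0           # each cell is 10 km x 10 km
density = counts / (cell_area * years)  # flashes km^-2 yr^-1
print(f"peak ground flash density: {density.max():.3f} flashes/km^2/yr")
```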

Abstract:

This paper describes a methodology developed for the classification of Medium Voltage (MV) electricity customers. Starting from a sample of databases resulting from a monitoring campaign, Data Mining (DM) techniques are used to discover a set of typical load profiles of MV consumers and, therefore, to extract knowledge about electric energy consumption patterns. In the first stage, several hierarchical clustering algorithms were applied and their clustering performance compared using adequacy measures. In the second stage, a classification model was developed to allow new consumers to be assigned to one of the clusters obtained in the previous process. Finally, the interpretation of the discovered knowledge is presented and discussed.
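
A minimal sketch of the two stages described, hierarchical clustering of load profiles followed by assignment of a new consumer to the nearest cluster centroid, on synthetic 24-hour profiles (the paper's algorithms and adequacy measures are not reproduced):

```python
# Sketch: cluster daily load profiles, then classify a new consumer by
# nearest centroid. Profiles are synthetic 24-point curves.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(3)
profiles = rng.random((200, 24))  # 200 consumers x 24 hourly readings

labels = fcluster(linkage(profiles, method="ward"), t=5, criterion="maxclust")
centroids = np.array([profiles[labels == k].mean(axis=0)
                      for k in range(1, labels.max() + 1)])

new_profile = rng.random(24)
assigned = np.argmin(np.linalg.norm(centroids - new_profile, axis=1)) + 1
print(f"new consumer assigned to cluster {assigned}")
```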

Abstract:

This paper presents a methodology supported by the knowledge discovery in databases (KDD) process for finding the failure probability of electrical equipment belonging to a real high-voltage electrical network. Data Mining (DM) techniques are used to discover a set of failure probabilities and, therefore, to extract knowledge concerning the unavailability of electrical equipment such as power transformers and high-voltage power lines. The framework includes several steps: analysis of the real database, data pre-processing, application of DM algorithms, and finally interpretation of the discovered knowledge. To validate the proposed methodology, a case study based on real databases is used. Because these data carry heavy uncertainty due to climate conditions, fuzzy logic was used to determine the set of electrical-component failure probabilities needed to re-establish service. The results reflect the interesting potential of this approach and encourage further research on the topic.
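
As a flavour of how fuzzy logic can fold climate uncertainty into a failure-probability estimate, here is a minimal sketch with invented triangular memberships and base probabilities; the paper's actual fuzzy variables and rules are not reproduced:

```python
# Sketch: weighting per-event failure probabilities by fuzzy weather
# memberships. All values are illustrative assumptions.
def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

wind = 50.0                              # km/h, hypothetical reading
mild, severe = tri(wind, 0, 30, 60), tri(wind, 40, 90, 140)
base_p = {"mild": 0.01, "severe": 0.12}  # assumed failure probabilities

# Defuzzified (membership-weighted) failure probability for this weather
p_fail = (mild * base_p["mild"] + severe * base_p["severe"]) / (mild + severe)
print(f"estimated failure probability: {p_fail:.3f}")
```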

Abstract:

In studies assessing the effects of a given exposure variable on a specific outcome of interest, confusion may arise from the mistaken impression that the exposure variable is producing the outcome of interest when, in fact, the observed effect is due to an existing confounder. However, quantitative techniques are rarely used to determine the potential influence of unmeasured confounders. Sensitivity analysis is a statistical technique that allows the impact of an unmeasured confounding variable on the association under study to be measured quantitatively. The purpose of this study was to make two sensitivity analysis methods available in the literature, developed by Rosenbaum and by Greenland, easy to apply using an electronic spreadsheet, so that researchers can more readily include this quantitative tool among the procedures commonly used in the result-validation stage.
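
A minimal sketch of external adjustment for an unmeasured binary confounder, of the kind such spreadsheets implement, with all inputs hypothetical:

```python
# Sketch: how strong an unmeasured confounder would have to be to explain
# an observed risk ratio. All inputs are hypothetical assumptions.
rr_obs = 2.0      # observed exposure-outcome risk ratio
rr_cd = 3.0       # assumed confounder-disease risk ratio
p1, p0 = 0.5, 0.2 # assumed confounder prevalence in exposed / unexposed

# Bias factor: how much of rr_obs the confounder alone could produce
bias = (p1 * (rr_cd - 1) + 1) / (p0 * (rr_cd - 1) + 1)
rr_adj = rr_obs / bias
print(f"bias factor {bias:.2f}; confounder-adjusted RR {rr_adj:.2f}")
```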

Abstract:

PURPOSE: Fatty liver disease (FLD) is an increasingly prevalent disease that can be reversed if detected early. Ultrasound is the safest and most ubiquitous method for identifying FLD. Since expert sonographers are required to interpret liver ultrasound images accurately, their absence leads to interobserver variability. For more objective interpretation, high accuracy, and quick second opinions, computer-aided diagnostic (CAD) techniques may be exploited. The purpose of this work is to develop one such CAD technique for the accurate classification of normal livers and abnormal livers affected by FLD. METHODS: In this paper, the authors present a CAD technique (called Symtosis) that uses a novel combination of significant features based on the texture, wavelet transform, and higher-order spectra of liver ultrasound images in various supervised learning-based classifiers, in order to determine parameters that classify normal and FLD-affected abnormal livers. RESULTS: On evaluating the proposed technique on a database of 58 abnormal and 42 normal liver ultrasound images, the authors achieved a high classification accuracy of 93.3% using the decision tree classifier. CONCLUSIONS: This high accuracy, added to the completely automated classification procedure, makes the authors' proposed technique highly suitable for clinical deployment and usage.
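
A minimal sketch of one of the feature families named above (wavelet-energy texture features) feeding a decision tree, on synthetic image patches; the actual Symtosis feature set is considerably richer:

```python
# Sketch: wavelet-energy texture features + decision tree classifier.
# Images and labels are synthetic stand-ins for ultrasound patches.
import numpy as np
import pywt
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

def wavelet_energies(img):
    # Mean absolute value of the detail subbands of a one-level DWT
    _, (lh, hl, hh) = pywt.dwt2(img, "db1")
    return [np.mean(np.abs(b)) for b in (lh, hl, hh)]

rng = np.random.default_rng(4)
images = rng.random((100, 64, 64))  # 100 synthetic liver patches
y = rng.integers(0, 2, 100)         # 0 = normal, 1 = FLD

X = np.array([wavelet_energies(im) for im in images])
print(cross_val_score(DecisionTreeClassifier(), X, y, cv=5).mean())
```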

Abstract:

The species abundance distribution (SAD) has been a central focus of community ecology for over fifty years, and is currently the subject of widespread renewed interest. The gambin model has recently been proposed as a model that provides a superior fit to commonly preferred SAD models. It has also been argued that the model's single parameter (α) presents a potentially informative ecological diversity metric, because it summarises the shape of the SAD in a single number. Despite this potential, few empirical tests of the model have been undertaken, perhaps because the necessary methods and software for fitting the model have not existed. Here, we derive a maximum likelihood method to fit the model, and use it to undertake a comprehensive comparative analysis of the fit of the gambin model. The functions and computational code to fit the model are incorporated in a newly developed free-to-download R package (gambin). We test the gambin model using a variety of datasets and compare the fit of the gambin model to fits obtained using the Poisson lognormal, logseries and zero-sum multinomial distributions. We found that gambin almost universally provided a better fit to the data and that the fit was consistent for a variety of sample grain sizes. We demonstrate how α can be used to differentiate intelligibly between community structures of Azorean arthropods sampled in different land use types. We conclude that gambin presents a flexible model capable of fitting a wide variety of observed SAD data, while providing a useful index of SAD form in its single fitted parameter. As such, gambin has wide potential applicability in the study of SADs, and ecology more generally.
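
The gambin likelihood itself is implemented in the R package named above. As a flavour of the maximum-likelihood model comparison, here is a sketch fitting one of the comparison distributions (the logseries) to a hypothetical abundance vector and computing its AIC:

```python
# Sketch: ML fit of the logseries SAD and its AIC, illustrating the kind
# of model comparison described. The abundance vector is hypothetical.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import logser

abund = np.array([1, 1, 1, 2, 2, 3, 4, 6, 9, 15, 27, 52])

def nll(p):
    # Negative log-likelihood of the logseries with parameter p
    return -logser.logpmf(abund, p).sum()

res = minimize_scalar(nll, bounds=(1e-6, 1 - 1e-6), method="bounded")
aic = 2 * 1 + 2 * res.fun  # one fitted parameter
print(f"logseries p = {res.x:.3f}, AIC = {aic:.1f}")
```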

Abstract:

OBJECTIVE: To estimate the spatial intensity of urban violence events using wavelet-based methods and emergency room data. METHODS: Information on victims attended at the emergency room of a public hospital in the city of São Paulo, Southeastern Brazil, from January 1, 2002 to January 11, 2003 was obtained from hospital records. The spatial distribution of 3,540 events was recorded, and a uniform random procedure was used to allocate records with incomplete addresses. Point processes and wavelet analysis techniques were used to estimate the spatial intensity, defined as the expected number of events per unit area. RESULTS: Of all georeferenced points, 59% were accidents and 40% were assaults. There is a non-homogeneous spatial distribution of the events, with high concentrations in two districts and along three large avenues in the southern area of the city of São Paulo. CONCLUSIONS: Hospital records combined with methodological tools to estimate the intensity of events are useful for studying urban violence. Wavelet analysis is useful for computing the expected number of events and the respective confidence bands for any sub-region and, consequently, for specifying risk estimates that could be used in decision-making processes for public policies.
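
A minimal sketch of a wavelet-smoothed intensity estimate (expected events per unit area) from point events, on synthetic coordinates; the paper's estimator and confidence bands are not reproduced:

```python
# Sketch: bin point events on a grid, then soft-threshold fine-scale
# wavelet coefficients to obtain a smoothed intensity surface.
import numpy as np
import pywt

rng = np.random.default_rng(5)
xy = rng.normal(loc=[10, 20], scale=2.0, size=(3540, 2))  # synthetic events

counts, _, _ = np.histogram2d(xy[:, 0], xy[:, 1], bins=64)

coeffs = pywt.wavedec2(counts, "haar", level=3)
thr = np.median(np.abs(coeffs[-1][-1])) / 0.6745  # MAD noise estimate
coeffs = [coeffs[0]] + [tuple(pywt.threshold(c, thr, "soft") for c in lvl)
                        for lvl in coeffs[1:]]
intensity = pywt.waverec2(coeffs, "haar")
print(f"peak smoothed intensity: {intensity.max():.1f} events/cell")
```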

Abstract:

Given the constant evolution of the Internet, its use has become almost mandatory. Through the web it is possible to check bank statements, shop in faraway countries, and pay for services without leaving home, among many other things; there are countless ways of using this network. As it became so useful and so close to people, people also began to gain more computing knowledge. The Internet also hosts several guides for unlawful intrusion into systems, as well as manuals for other criminal practices. This kind of information, combined with users' growing computing skills, has changed the current paradigms of computer security. Today, computer security is less concerned with hardware; the main goal is to safeguard data and the continuity of services. This is fundamentally due to organizations' dependence on their digital data and, increasingly, on the services they make available online. Given the change in the threats and in what must be protected, the security mechanisms must change as well. It becomes necessary to know the attacker, in order to anticipate what motivates him and what he intends to attack. In this context, we proposed implementing systems for logging unlawful access attempts at five higher-education institutions, with subsequent analysis of the collected information using data mining techniques. This solution is rarely used for this purpose in research, so it was necessary to look for analogies with other application areas in order to gather documentation relevant to its implementation. The resulting solution proved effective, having led to the development of an application that fuses the logs of the Honeyd and Snort applications (and is also responsible for their processing, preparation, and delivery as a Comma-Separated Values (CSV) file), adding knowledge about what can be obtained statistically and revealing useful, previously unknown characteristics of the attackers. This knowledge can be used by a system administrator to improve the performance of security mechanisms such as firewalls and Intrusion Detection Systems (IDS).
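
A minimal sketch of the log-fusion step, merging Honeyd and Snort records on timestamp and source IP and writing the CSV; column names and file paths are assumptions, not the thesis's actual schema:

```python
# Sketch: fusing Honeyd and Snort logs into a single CSV. The file
# layouts below are hypothetical placeholders.
import pandas as pd

honeyd = pd.read_csv("honeyd.log", sep=r"\s+", engine="python",
                     names=["ts", "proto", "src_ip", "src_port",
                            "dst_ip", "dst_port"])
snort = pd.read_csv("snort_alerts.csv",
                    names=["ts", "src_ip", "signature", "priority"])

# Outer join keeps connection attempts with and without matching alerts
fused = honeyd.merge(snort, on=["ts", "src_ip"], how="outer")
fused.to_csv("fused_events.csv", index=False)
print(f"{len(fused)} fused events written")
```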

Abstract:

This study concerns research carried out within a Master's thesis in the second study cycle of the Geotechnical and Geoenvironmental Engineering programme, on the contribution of X-ray fluorescence (XRF) to the zoning of georesources, with particular emphasis on the use of the portable instrument and of state-of-the-art technological tools that are indispensable to the prospecting and exploitation of mineral resources, namely in the interpretation and integration of geological data and in the modelling of methods for the exploitation and processing/treatment of mineral deposits, as well as their control. This dissertation discusses the fundamental aspects of the use of the portable X-ray fluorescence technique (pXRF), regarding its applicability and the methodology required, with a view to defining zones whose chemical characteristics are analogous to those of the georesource and meet the requirements specified for the use of the raw material in the consuming industries. A campaign was carried out to collect limestone samples from the Sangardão quarry, in Condeixa-a-Nova; its first phase had as its main objective identifying the chemical composition of the study area and the degree of precision of the portable XRF instrument. In addition to this analysis, particle-size analyses by sieving and by X-ray sedimentation were performed on samples from the settling basins and from the material processed in the filter press. Once the pXRF analysis method had been validated, the second phase of this work consisted of a fairly extensive sampling of points analysed by pXRF, so as to obtain greater chemical coverage of the study area and to locate the key sites for exploiting the raw material. For a correct reading of the analysed data, tools allied to the new technologies were used, which proved an important contribution to good management of the georesource under evaluation, namely XLSTAT for the statistical treatment of the data and Surfer for modelling.
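
A minimal sketch of gridding point chemistry data to delineate zones, analogous to the surface modelling done in Surfer, with synthetic sample locations and CaO grades:

```python
# Sketch: interpolating pXRF grades onto a regular grid for zoning.
# Sample coordinates and CaO values are synthetic assumptions.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(6)
pts = rng.uniform(0, 100, (80, 2))  # sample locations in the quarry (m)
cao = 45 + 8 * rng.random(80)       # CaO grade (wt.%) at each sample

gx, gy = np.mgrid[0:100:50j, 0:100:50j]
grid = griddata(pts, cao, (gx, gy), method="cubic")
print(f"grade range on grid: {np.nanmin(grid):.1f}-{np.nanmax(grid):.1f} wt.%")
```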

Abstract:

Independent component analysis (ICA) has recently been proposed as a tool to unmix hyperspectral data. ICA is founded on two assumptions: 1) the observed spectrum vector is a linear mixture of the constituent spectra (endmember spectra) weighted by the corresponding abundance fractions (sources); 2) the sources are statistically independent. Independent factor analysis (IFA) extends ICA to linear mixtures of independent sources immersed in noise. Concerning hyperspectral data, the first assumption is valid whenever the multiple scattering among the distinct constituent substances (endmembers) is negligible and the surface is partitioned according to the fractional abundances. The second assumption, however, is violated, since the sum of the abundance fractions associated with each pixel is constant due to physical constraints in the data acquisition process. Thus, the sources cannot be statistically independent, which compromises the performance of ICA/IFA algorithms in hyperspectral unmixing. This paper studies the impact of hyperspectral source statistical dependence on ICA and IFA performance. We conclude that the accuracy of these methods tends to improve with increasing signature variability, number of endmembers, and signal-to-noise ratio. In any case, there are always endmembers that are incorrectly unmixed. We arrive at this conclusion by minimizing the mutual information of simulated and real hyperspectral mixtures. The computation of mutual information is based on fitting mixtures of Gaussians to the observed data. A method to sort ICA and IFA estimates in terms of the likelihood of being correctly unmixed is proposed.
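
A minimal sketch of the dependence problem: with two endmembers, sum-to-one abundances make the mixed data effectively one-dimensional, so an ICA algorithm (FastICA here, standing in for the algorithms evaluated) cannot recover two independent sources:

```python
# Sketch: sum-to-one abundances violate ICA's independence assumption.
# Endmember spectra and abundances are synthetic.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(7)
a1 = rng.uniform(0, 1, 1000)
A = np.column_stack([a1, 1 - a1])  # abundances sum to one, hence dependent
E = rng.random((2, 50))            # two endmember spectra over 50 bands

X = A @ E + 0.01 * rng.normal(size=(1000, 50))
est = FastICA(n_components=2, random_state=0).fit_transform(X)

# The signal is effectively one-dimensional: one component tracks a1
# while the other mostly fits noise.
for k in range(2):
    c = abs(np.corrcoef(est[:, k], a1)[0, 1])
    print(f"component {k}: |corr with true abundance| = {c:.2f}")
```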

Abstract:

The tongue is the most important and dynamic articulator in speech formation, because of its anatomical aspects (particularly the large volume of this muscular organ compared to the surrounding organs of the vocal tract) and also due to the wide range of movements and the flexibility involved. In speech communication research, a variety of techniques have been used for measuring three-dimensional vocal tract shapes. More recently, magnetic resonance imaging (MRI) has become common, mainly because this technique allows the collection of a set of static and dynamic images that can represent the entire vocal tract along any orientation. Over the years, different anatomical organs of the vocal tract have been modelled, namely 2D and 3D tongue models, using parametric or statistical modelling procedures. Our aim is to present and describe some 3D models reconstructed from MRI data for one subject uttering sustained articulations of some typical Portuguese sounds. Thus, we present a 3D database of the tongue, obtained by stack combinations, with the subject articulating Portuguese vowels. This 3D knowledge of the speech organs could be very important, especially for clinical purposes (for example, the assessment of articulatory impairments followed by tongue surgery in speech rehabilitation) and for a better understanding of the acoustic theory of speech formation.
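
A minimal sketch of surface extraction from a segmented MRI volume with marching cubes, one common route to such 3D models, using a synthetic ellipsoid in place of a real tongue segmentation:

```python
# Sketch: extract a 3D surface mesh from a volumetric scalar field.
# The ellipsoid field below stands in for a segmented MRI stack.
import numpy as np
from skimage import measure

z, y, x = np.mgrid[-32:32, -32:32, -32:32]
volume = (x / 30.0) ** 2 + (y / 20.0) ** 2 + (z / 12.0) ** 2

# The level-1.0 isosurface of this field is the ellipsoid boundary
verts, faces, normals, values = measure.marching_cubes(volume, level=1.0)
print(f"{len(verts)} vertices, {len(faces)} triangles")
```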

Abstract:

Modern real-time systems, with a more flexible and adaptive nature, demand approaches for timeliness evaluation based on probabilistic measures of meeting deadlines. In this context, simulation can emerge as an adequate solution for understanding and analyzing the timing behaviour of actual systems. However, care must be taken with the obtained outputs, at the risk of producing results that lack credibility. It is particularly important to consider that we are more interested in values from the tail of a probability distribution (near worst-case probabilities) than in deriving confidence on mean values. We approach this subject by considering the random nature of simulation output data. We start by discussing well-known approaches for estimating distributions from simulation output, and the confidence that can be attached to their mean values. This is the basis for a discussion of the applicability of such approaches to deriving confidence on the tail of distributions, where the worst case is expected to lie.
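
A minimal sketch of the tail-oriented view: estimating a deadline-miss probability from simulation output, with an exact binomial (Clopper-Pearson) confidence interval on the exceedance probability rather than a confidence interval on the mean; the data and deadline are synthetic:

```python
# Sketch: confidence on the tail of simulated response times.
# Response-time model and deadline are hypothetical placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
resp = rng.gamma(shape=4.0, scale=1.0, size=10_000)  # simulated responses
deadline = 12.0

misses = int((resp > deadline).sum())
p_hat = misses / resp.size

# Exact (Clopper-Pearson) 95% interval for the exceedance probability
lo, hi = stats.beta.ppf([0.025, 0.975],
                        [misses, misses + 1],
                        [resp.size - misses + 1, resp.size - misses])
print(f"P(miss) ~ {p_hat:.4f}, 95% CI [{lo:.4f}, {hi:.4f}]")
```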