79 results for Audio Data set
Abstract:
In Information Visualization, adding and removing data elements can strongly impact the underlying visual space. We previously developed an inherently incremental technique (incBoard) that maintains a coherent disposition of elements from a dynamic multidimensional data set on a 2D grid as the set changes. Here, we introduce a novel layout that uses the pairwise similarity of grid neighbors, as defined in incBoard, to reposition elements in the visual space, free from the constraints imposed by the grid. The board continues to be updated and can be displayed alongside the new space. As similar items are placed together while dissimilar neighbors are moved apart, the layout supports users in identifying clusters and subsets of related elements. Densely populated areas identified in the incSpace can be efficiently explored with the corresponding incBoard visualization, which is not susceptible to occlusion. The solution remains inherently incremental and maintains a coherent disposition of elements, even for fully renewed sets. The algorithm considers relative positions for the initial placement of elements and raw dissimilarity to fine-tune the visualization. It has low computational cost, with complexity depending only on the size of the currently viewed subset, V. Thus, a data set of size N can be sequentially displayed in O(N) time, reaching O(N²) only if the complete set is displayed simultaneously.
Abstract:
This paper is concerned with the computational efficiency of fuzzy clustering algorithms when the data set to be clustered is described by a proximity matrix only (relational data) and the number of clusters must be automatically estimated from such data. A fuzzy variant of an evolutionary algorithm for relational clustering is derived and compared against two systematic (pseudo-exhaustive) approaches that can also be used to automatically estimate the number of fuzzy clusters in relational data. An extensive collection of experiments involving 18 artificial and two real data sets is reported and analyzed. (C) 2011 Elsevier B.V. All rights reserved.
Abstract:
There is a family of well-known external clustering validity indexes to measure the degree of compatibility or similarity between two hard partitions of a given data set, including partitions with different numbers of categories. A unified, fully equivalent set-theoretic formulation for an important class of such indexes was derived and extended to the fuzzy domain in a previous work by the author [Campello, R.J.G.B., 2007. A fuzzy extension of the Rand index and other related indexes for clustering and classification assessment. Pattern Recognition Lett., 28, 833-841]. However, the proposed fuzzy set-theoretic formulation is not valid as a general approach for comparing two fuzzy partitions of data. Instead, it is an approach for comparing a fuzzy partition against a hard referential partition of the data into mutually disjoint categories. In this paper, generalized external indexes for comparing two data partitions with overlapping categories are introduced. These indexes can be used as general measures for comparing two partitions of the same data set into overlapping categories. An important issue that is seldom touched in the literature is also addressed, namely, how to compare two partitions of different subsamples of data. A number of pedagogical examples and three simulation experiments are presented and analyzed in detail. A review of recent related work compiled from the literature is also provided. (c) 2010 Elsevier B.V. All rights reserved.
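As background, the pairwise-agreement idea behind these hard-partition indexes can be sketched with the classical Rand index. A minimal Python illustration (the label vectors below are made up for demonstration, not taken from the paper):

```python
from itertools import combinations

def rand_index(p1, p2):
    """Rand index between two hard partitions, given as label lists.

    Counts the object pairs on which the partitions agree: pairs grouped
    together in both partitions, plus pairs separated in both."""
    pairs = list(combinations(range(len(p1)), 2))
    agreements = sum(
        (p1[i] == p1[j]) == (p2[i] == p2[j]) for i, j in pairs
    )
    return agreements / len(pairs)

# Identical partitions agree on every pair.
print(rand_index([0, 0, 1, 1], [0, 0, 1, 1]))  # 1.0
# The compared partitions may use different numbers of categories.
print(rand_index([0, 0, 1, 1], [0, 1, 2, 2]))  # 5/6
```

Note that the index is defined over object pairs, which is why partitions with different numbers of categories remain directly comparable.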
Abstract:
In this paper we introduce a parametric model for handling lifetime data in which an early lifetime can be related either to infant-mortality failure or to wear processes, but it is not known which risk is responsible for the failure. The maximum likelihood approach and the sampling-based approach are used to obtain the inferences of interest. Some special cases of the proposed model are studied via Monte Carlo methods to assess the size and power of hypothesis tests. To illustrate the proposed methodology, we present an example based on a real data set.
A bivariate regression model for matched paired survival data: local influence and residual analysis
Abstract:
The use of bivariate distributions plays a fundamental role in survival and reliability studies. In this paper, we consider a location-scale model for bivariate survival times in which a copula is used to model the dependence between the bivariate survival data. For the proposed model, we consider inferential procedures based on maximum likelihood. Gains in efficiency from bivariate models are also examined in the censored data setting. For different parameter settings, sample sizes and censoring percentages, various simulation studies are performed to assess the performance of the bivariate regression model for matched paired survival data. Sensitivity analysis methods such as local and total influence are presented and derived under three perturbation schemes. The martingale marginal and deviance marginal residual measures are used to check the adequacy of the model. Furthermore, we propose a new measure, which we call the modified deviance component residual. The methodology is illustrated on a lifetime data set for kidney patients.
Abstract:
In survival analysis applications, the failure rate function may frequently present a unimodal shape. In such cases, the log-normal or log-logistic distributions are used. In this paper, we shall be concerned only with parametric forms, so a location-scale regression model based on the Burr XII distribution is proposed for modeling data with a unimodal failure rate function as an alternative to the log-logistic regression model. Assuming censored data, we consider a classical analysis, a Bayesian analysis and a jackknife estimator for the parameters of the proposed model. For different parameter settings, sample sizes and censoring percentages, various simulation studies are performed and the performance of the log-logistic and log-Burr XII regression models is compared. In addition, we use sensitivity analysis to detect influential or outlying observations, and residual analysis to check the model assumptions. Finally, we analyze a real data set under log-Burr XII regression models. (C) 2008 Published by Elsevier B.V.
A robust Bayesian approach to null intercept measurement error model with application to dental data
Abstract:
Measurement error models often arise in epidemiological and clinical research. Usually, in this setup it is assumed that the latent variable has a normal distribution. However, the normality assumption may not always be correct. The skew-normal/independent distribution is a class of asymmetric thick-tailed distributions that includes the skew-normal distribution as a special case. In this paper, we explore the use of the skew-normal/independent distribution as a robust alternative in the null intercept measurement error model under a Bayesian paradigm. We assume that the random errors and the unobserved value of the covariate (latent variable) jointly follow a skew-normal/independent distribution, providing an appealing robust alternative to the routine use of the symmetric normal distribution in this type of model. Specific distributions examined include univariate and multivariate versions of the skew-normal, skew-t, skew-slash and skew contaminated normal distributions. The methods developed are illustrated using a real data set from a dental clinical trial. (C) 2008 Elsevier B.V. All rights reserved.
Abstract:
Flash points (T_FP) of hydrocarbons are calculated from their flash point numbers, N_FP, with the relationship T_FP (K) = 23.369 N_FP^(2/3) + 20.010 N_FP^(1/3) + 31.901. In turn, the N_FP values can be predicted from experimental boiling point numbers (Y_BP) and molecular structure with the equation N_FP = 0.987 Y_BP + 0.176 D + 0.687 T + 0.712 B - 0.176, where D is the number of olefinic double bonds in the structure, T is the number of triple bonds, and B is the number of aromatic rings. For a data set consisting of 300 diverse hydrocarbons, the average absolute deviation between the literature and predicted flash points was 2.9 K.
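The two correlations transcribe directly into code. A minimal sketch using the coefficients from the abstract (the example input values below are hypothetical, not drawn from the 300-compound data set):

```python
def flash_point_number(y_bp, d=0, t=0, b=0):
    """N_FP from the boiling point number Y_BP, with D olefinic double
    bonds, T triple bonds and B aromatic rings (second equation)."""
    return 0.987 * y_bp + 0.176 * d + 0.687 * t + 0.712 * b - 0.176

def flash_point_kelvin(n_fp):
    """Flash point T_FP in kelvin from N_FP (first equation)."""
    return 23.369 * n_fp ** (2 / 3) + 20.010 * n_fp ** (1 / 3) + 31.901

# Hypothetical compound: Y_BP = 5.0, one aromatic ring, no multiple bonds.
print(flash_point_kelvin(flash_point_number(5.0, b=1)))
```

With N_FP = 1 the first equation reduces to the sum of its three coefficients, 75.28 K, which is a convenient sanity check on a transcription.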
Abstract:
This article deals with modeling the scavenging of particulate sulfate and gaseous sulfur dioxide, emphasizing the synoptic conditions at different sampling sites in order to verify whether in-cloud or below-cloud scavenging processes dominate in the Metropolitan Area of São Paulo (RMSP). Three sampling sites were chosen: GV (Granja Viana) on the outskirts of the RMSP, and IAG-USP and Mackenzie in the RMSP center. Based on the synoptic conditions, a group of events was selected to which a simple numerical scavenging model was applied. These synoptic conditions were usually convective cloud storms, which are common in the RMSP. The results show that in-cloud processes were dominant (80%) in sulfate/sulfur dioxide scavenging, with below-cloud processes accounting for around 20% of the total. Clearly convective events, with total rainfall higher than 20 mm, are better modeled than stratiform events, with a correlation coefficient of 0.92. There is also a clear association between events with higher rainfall amounts and the ratio of modeled to observed data, with a correlation coefficient of 0.63. Additionally, the suburban sampling site, GV, as expected given its distance from pollution sources, generally presents smaller amounts of rainwater sulfate (modeled and observed) than the central sampling site, Mackenzie, where the characterization of the events partially explains the differences in rainfall concentrations.
Abstract:
The aim of the present study was to evaluate the effect of soil characteristics (pH, macro- and micronutrients), environmental factors (temperature, humidity, period of the year and time of day of collection) and meteorological conditions (rain, sun, cloud and cloud/rain) on the flavonoid content of leaves of Passiflora incarnata L., Passifloraceae. The total flavonoid contents of leaf samples harvested from plants cultivated or collected under different conditions were quantified by high-performance liquid chromatography with ultraviolet detection (HPLC-UV/PAD). Chemometric treatment of the data by principal component analysis (PCA) and hierarchical cluster analysis (HCA) showed that the samples did not present a specific classification in relation to the environmental and soil variables studied, and that the environmental variables were not significant in describing the data set. However, the levels of the elements Fe, B and Cu present in the soil showed an inverse correlation with the total flavonoid contents of the leaves of P. incarnata.
Abstract:
OBJECTIVE: To estimate the prevalence of congenital defects (CD) in a cohort of live births (LB) by linking the databases of the Mortality Information System (SIM) and the Live Birth Information System (SINASC). METHODS: Descriptive study to evaluate live birth certificates as a source of information on CD. The study population is a cohort of in-hospital live births from the first half of 2006 to mothers residing in the Municipality of São Paulo, occurring between 01/01/2006 and 06/30/2006, obtained by linking the databases of live birth certificates and neonatal deaths from the cohort. RESULTS: The most prevalent CD according to SINASC were: congenital malformations (CM) and deformities of the musculoskeletal system (44.7%), CM of the nervous system (10.0%) and chromosomal anomalies (8.6%). After linkage, 80.0% of individuals with CD of the circulatory system were recovered, along with 73.3% with CD of the respiratory system and 62.5% with CD of the digestive system. SINASC accounted for 55.2% of CD notifications and SIM for 44.8%, proving important for the recovery of CD information. According to SINASC, the prevalence rate of CD in the cohort was 75.4 per 1,000 LB; with the data linked to SIM, this rate rose to 86.2 per 1,000 LB. CONCLUSIONS: The complementary data obtained through the SIM/SINASC linkage provide a more realistic profile of CD prevalence than that recorded by SINASC alone, which identifies the most visible CD, while SIM identifies the most lethal ones, showing the importance of using the two data sources together.
Abstract:
The objective of this study was to present and discuss the use of two association measures, the odds ratio and the prevalence ratio, in data obtained from a cross-sectional study carried out in 2001-2002 using a two-stage stratified cluster sample (n=1,958). Odds ratios and prevalence ratios were estimated by unconditional logistic regression and Poisson regression, respectively, using the statistical package Stata 7.0. Confidence intervals and design effects were considered in assessing the precision of the estimates. Two outcomes of the cross-sectional study, with different prevalence levels, were evaluated: influenza vaccination (66.1%) and self-reported pulmonary disease (6.9%). When prevalence was high, the prevalence ratio estimates were more conservative, with narrower confidence intervals. For the low-prevalence outcome, no large numerical differences were observed between odds ratio and prevalence ratio estimates, or between the standard errors obtained by either technique. Design effects greater than one indicated that the complex sampling increased the variance of the estimates in both cases. The choice of the most appropriate technique and estimator rests with the researcher, and should be made on epidemiological grounds.
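The divergence between the two measures at high prevalence can be reproduced on a hypothetical 2×2 table (the counts below are illustrative only, not the study's data):

```python
def odds_ratio(a, b, c, d):
    """OR from a 2x2 table: exposed with/without the outcome (a, b),
    unexposed with/without the outcome (c, d)."""
    return (a / b) / (c / d)

def prevalence_ratio(a, b, c, d):
    """PR: outcome prevalence among the exposed over the unexposed."""
    return (a / (a + b)) / (c / (c + d))

# Hypothetical counts for a common outcome (prevalence 60-80%).
a, b, c, d = 80, 20, 60, 40
print(odds_ratio(a, b, c, d))        # about 2.67
print(prevalence_ratio(a, b, c, d))  # about 1.33
```

When the outcome is common, the OR (2.67 here) lies much further from 1 than the PR (1.33), which is the sense in which the prevalence ratio estimates are described as more conservative.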
Abstract:
The objective of this work is to discuss the feasibility of subnational regulation of basic sanitation in Brazil, as established by Law No. 11.445/2007. The feasibility of municipal regulation was analyzed in 2,523 municipalities, based on the 2005 sample of the National Sanitation Information System (SNIS), by applying regulation fees of 1 to 3% of the concessionaires' revenues. It was concluded that local regulation is not feasible in 97% of the municipalities surveyed.
Abstract:
Despite the valuable contributions of robotics and high-throughput approaches to protein crystallization, the role of an experienced crystallographer in the evaluation and rationalization of a crystallization process is still crucial to obtaining crystals suitable for X-ray diffraction measurements. In this work, the difficult task of crystallizing the flavoenzyme L-amino-acid oxidase purified from Bothrops atrox snake venom was overcome by the development of a protocol that first required the identification of a non-amorphous precipitate as a promising crystallization condition, followed by the implementation of a methodology that combined crystallization in the presence of oil and seeding techniques. Crystals were obtained and a complete data set was collected to 2.3 Å resolution. The crystals belonged to space group P2₁, with unit-cell parameters a = 73.64, b = 123.92, c = 105.08 Å, β = 96.03°. There were four protein subunits in the asymmetric unit, which gave a Matthews coefficient V_M of 2.12 Å³ Da⁻¹, corresponding to 42% solvent content. The structure has been solved by molecular-replacement techniques.
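The reported solvent content is consistent with the Matthews coefficient under the commonly used approximation solvent fraction ≈ 1 - 1.23/V_M (the 1.23 constant is the conventional value for protein crystals, assumed here; it is not stated in the abstract):

```python
def solvent_fraction(v_m):
    """Estimated solvent fraction from the Matthews coefficient
    V_M (in A^3/Da), using the conventional 1.23 constant."""
    return 1.0 - 1.23 / v_m

# V_M = 2.12 A^3/Da, as reported for the four subunits per asymmetric unit.
print(round(100 * solvent_fraction(2.12)))  # 42, matching the abstract
```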
Abstract:
Background: Microarray techniques have become an important tool in the investigation of genetic relationships and the assignment of different phenotypes. Since microarrays are still very expensive, most experiments are performed with small samples. This paper introduces a method to quantify dependency between data series composed of few sample points. The method is used to construct gene co-expression subnetworks of highly significant edges. Results: The results shown here are for an adapted subset of a Saccharomyces cerevisiae gene expression data set with low temporal resolution and poor statistics. The method reveals common transcription factors with a high confidence level and allows the construction of subnetworks with high biological relevance that reveal characteristic features of the processes driving the organism's adaptations to specific environmental conditions. Conclusion: Our method allows a reliable and sophisticated analysis of microarray data even under severe constraints. The use of systems biology improves biologists' ability to elucidate the mechanisms underlying cellular processes and to formulate new hypotheses.