39 results for Statistical methodologies
in the Biblioteca Digital da Produção Intelectual da Universidade de São Paulo (BDPI/USP)
Resumo:
The use of an adequate method for evaluating the adhesion of root canal filling materials provides more reliable results, allowing comparison of the materials and substantiating their clinical choice. The aims of this study were to compare the shear bond strength (SBS) test and the push-out test for evaluation of the adhesion of an epoxy-based endodontic sealer (AH Plus) to dentin and gutta-percha, and to assess the failure modes on the debonded surfaces by scanning electron microscopy (SEM). Three groups were established (n=7): in group 1, root cylinders obtained from human canines were embedded in acrylic resin and had their canals prepared and filled with sealer; in group 2, longitudinal sections of dentin cylinders were embedded in resin with the canal surface smoothed and turned upwards; in group 3, gutta-percha cylinders were embedded in resin. Polyethylene tubes filled with sealer were positioned on the polished surface of the specimens (groups 2 and 3). The push-out test (group 1) and the SBS test (groups 2 and 3) were performed in an Instron universal testing machine at a crosshead speed of 1 mm/min. Means (±SD) in MPa were: G1 (8.8±1.13), G2 (5.9±1.05) and G3 (3.8±0.55). Statistical analysis by ANOVA and Student's t-test (α=0.05) revealed statistically significant differences (p<0.01) among the groups. SEM analysis showed a predominance of adhesive and mixed failures of AH Plus sealer. The tested surface significantly affected the results: with the SBS test, the sealer reached higher bond strength to dentin than to gutta-percha. The comparison of the employed methodologies showed that the SBS test produced significantly lower bond strength values than the push-out test, was suitable for determining the adhesion of AH Plus sealer to dentin and gutta-percha, and required specimens that could be easily prepared for SEM, presenting itself as a viable alternative for further experiments.
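For reference, the reported means and standard deviations are enough to reproduce the group comparison. A minimal sketch, assuming the ANOVA is the standard one-way analysis on the three independent groups (n = 7 each) and using only the summary statistics quoted above:

```python
# Reported summary statistics (mean, SD, n) for the three groups, in MPa
groups = {
    "G1 push-out": (8.8, 1.13, 7),
    "G2 SBS dentin": (5.9, 1.05, 7),
    "G3 SBS gutta-percha": (3.8, 0.55, 7),
}

k = len(groups)
N = sum(n for _, _, n in groups.values())
grand_mean = sum(m * n for m, _, n in groups.values()) / N

# Between-group and within-group sums of squares from summary statistics alone
ss_between = sum(n * (m - grand_mean) ** 2 for m, _, n in groups.values())
ss_within = sum((n - 1) * sd ** 2 for _, sd, n in groups.values())

f_stat = (ss_between / (k - 1)) / (ss_within / (N - k))
print(f"F({k - 1}, {N - k}) = {f_stat:.2f}")  # far above the 1% critical value (~6.0)
```

The resulting F statistic is consistent with the reported p<0.01 among the groups.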
Resumo:
PURPOSE: The main goal of this study was to develop and compare two different techniques for the classification of specific types of corneal shapes when Zernike coefficients are used as inputs: a feed-forward artificial Neural Network (NN) and discriminant analysis (DA). METHODS: The inputs for both the NN and DA were the first 15 standard Zernike coefficients for 80 previously classified corneal elevation data files from an Eyesys System 2000 Videokeratograph (VK), installed at the Departamento de Oftalmologia of the Escola Paulista de Medicina, São Paulo. The NN had 5 output neurons, each associated with one of 5 typical corneal shapes: keratoconus, with-the-rule astigmatism, against-the-rule astigmatism, "regular" or "normal" shape, and post-PRK. RESULTS: The NN and DA responses were statistically analyzed in terms of accuracy ([true positives + true negatives]/total number of cases). Mean overall results for all cases were 94% for the NN and 84.8% for DA. CONCLUSION: Although we used a relatively small database, the results obtained in the present study indicate that Zernike polynomials as descriptors of corneal shape may be a reliable input for the diagnostic automation of VK maps, using either NN or DA.
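The figure of merit defined above, ([true positives + true negatives]/total number of cases), is straightforward to compute. A minimal sketch with hypothetical labels and predictions (not the study's data); the helper names are ours:

```python
# Hypothetical predictions over the 5 corneal shape classes (not the study's data)
true_labels = ["keratoconus", "normal", "post-PRK", "normal", "keratoconus"]
predicted   = ["keratoconus", "normal", "normal",   "normal", "keratoconus"]

def accuracy(y_true, y_pred):
    """Overall fraction of correctly classified cases."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def per_class_accuracy(y_true, y_pred, cls):
    """(TP + TN) / total for a one-vs-rest view of class `cls`."""
    tp = sum(t == cls and p == cls for t, p in zip(y_true, y_pred))
    tn = sum(t != cls and p != cls for t, p in zip(y_true, y_pred))
    return (tp + tn) / len(y_true)

print(f"overall: {accuracy(true_labels, predicted):.1%}")
print(f"post-PRK one-vs-rest: {per_class_accuracy(true_labels, predicted, 'post-PRK'):.1%}")
```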
Resumo:
This research analyzed methodologies for constructing price indices for freight transport. A review of the main formulas of economic theory concluded that the Fisher and Walsh indices satisfy the largest number of logical, statistical and economic criteria, followed by the geometric indices of Törnqvist, Vartia and Theil. The Laspeyres and Paasche indices, despite some limitations, remain widely used because they are easier to compute. The research presented a case study on the transport of soybeans. Four treatments were performed, yielding the accumulated variation in the general price level for road transport of soybeans in Brazil between February 1998 and March 2002. According to the results, this accumulated variation was 76%.
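The index formulas compared in the study have compact closed forms. A sketch of the Laspeyres, Paasche, Fisher and Walsh indices with hypothetical two-period price and quantity data (the figures are illustrative, not the freight data from the case study):

```python
from math import sqrt

# Hypothetical base-period (0) and current-period (1) prices and quantities
p0, q0 = [10.0, 20.0, 5.0], [100, 50, 200]
p1, q1 = [12.0, 21.0, 6.5], [90, 55, 180]

def laspeyres(p0, p1, q0):
    # Base-period quantities as weights
    return sum(a * q for a, q in zip(p1, q0)) / sum(a * q for a, q in zip(p0, q0))

def paasche(p0, p1, q1):
    # Current-period quantities as weights
    return sum(a * q for a, q in zip(p1, q1)) / sum(a * q for a, q in zip(p0, q1))

def fisher(p0, p1, q0, q1):
    # Geometric mean of Laspeyres and Paasche (the "ideal" index)
    return sqrt(laspeyres(p0, p1, q0) * paasche(p0, p1, q1))

def walsh(p0, p1, q0, q1):
    # Geometric means of the two periods' quantities as weights
    w = [sqrt(a * b) for a, b in zip(q0, q1)]
    return sum(p * x for p, x in zip(p1, w)) / sum(p * x for p, x in zip(p0, w))

L, P = laspeyres(p0, p1, q0), paasche(p0, p1, q1)
print(f"Laspeyres {L:.4f}  Paasche {P:.4f}  "
      f"Fisher {fisher(p0, p1, q0, q1):.4f}  Walsh {walsh(p0, p1, q0, q1):.4f}")
```

By construction, the Fisher index always lies between the Laspeyres and Paasche values, which is one reason it satisfies more of the axiomatic tests mentioned above.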
Resumo:
Population aging is a striking feature of the demographic transition. The study of underlying causes of death among the elderly reveals their epidemiological profile, although it may be hampered by the high proportion of ill-defined causes. The aim of this study is to describe mortality of the elderly from these causes in Brazil. The data source was the Mortality Information System of the Ministry of Health. Among the variables, the main one was the ill-defined underlying cause of death [Chapter XVIII of the International Statistical Classification of Diseases and Related Health Problems, Tenth Revision (ICD-10)]. These deaths among the elderly decreased by 35% between 1996 and 2005. Comparing deaths at ages 60 to 69 with those at 80 years and over, the proportions of ill-defined causes in 2005 increased by 9.9% and 14.8%, respectively. Methods aiming at their reduction are suggested, the most important being that physicians fill out death certificates properly, with the true underlying, consequential and terminal causes, which is the main goal of researchers in the field.
Resumo:
Mortality statistics are used in epidemiology and public health as indicators of health status, in the evaluation of health programs, and in population studies comparing temporal trends and geographic differences. One of the variables used in this type of analysis is the underlying cause of death. However, the quality of statistics based on the causes of death declared by physicians on death certificates has been criticized. The aim of this article is to reflect on the reliability of the causes of death declared by physicians on death certificates, based on studies conducted with different methodologies, and to comment on the validity of cause-of-death mortality statistics.
Resumo:
In recent years, we have experienced increasing interest in the understanding of the physical properties of collisionless plasmas, mostly because of the large number of astrophysical environments (e.g., the intracluster medium (ICM)) containing magnetic fields that are strong enough to be coupled with the ionized gas and characterized by densities sufficiently low to prevent pressure isotropization with respect to the magnetic line direction. Under these conditions, a new class of kinetic instabilities arises, such as the firehose and mirror instabilities, which have been studied extensively in the literature. Their role in the turbulence evolution and cascade process in the presence of pressure anisotropy, however, is still unclear. In this work, we present the first statistical analysis of turbulence in collisionless plasmas using three-dimensional numerical simulations, solving the double-isothermal magnetohydrodynamic equations with the Chew-Goldberger-Low closure (CGL-MHD). We study models with different initial conditions to account for the firehose and mirror instabilities and to obtain different turbulent regimes. We found that CGL-MHD subsonic and supersonic turbulence shows small differences compared to the MHD models in most cases. However, in the regimes of strong kinetic instabilities, the statistics, i.e. the probability distribution functions (PDFs) of density and velocity, are very different. In subsonic models, the instabilities cause an increase in the dispersion of density, while the dispersion of velocity is increased by a large factor in some cases. Moreover, the spectra of density and velocity show increased power at small scales, explained by the high growth rate of the instabilities. Finally, we calculated the structure functions of velocity and density fluctuations in the local reference frame defined by the direction of the magnetic lines.
The results indicate that in some cases the instabilities significantly increase the anisotropy of fluctuations. These results, even though preliminary and restricted to very specific conditions, show that the physical properties of turbulence in collisionless plasmas, such as those found in the ICM, may be very different from what has been widely believed.
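A second-order structure function of the kind computed in the final step can be illustrated in one dimension. A minimal sketch on a synthetic correlated signal (not the simulation data): S2(ℓ) = ⟨|v(x+ℓ) − v(x)|²⟩.

```python
import math
import random

random.seed(0)
# Synthetic 1-D "velocity" signal standing in for a cut through a simulation box
v = [math.sin(0.1 * i) + 0.1 * random.gauss(0, 1) for i in range(1000)]

def structure_function(v, lag, order=2):
    """S_p(lag) = mean of |v[i+lag] - v[i]|**p over the signal."""
    diffs = [abs(v[i + lag] - v[i]) ** order for i in range(len(v) - lag)]
    return sum(diffs) / len(diffs)

# For a correlated signal, S2 grows with separation before saturating
for lag in (1, 4, 16):
    print(lag, structure_function(v, lag))
```

In the paper's setting the same average is taken separately along and across the local magnetic field direction, so the ratio of the two S2 curves measures the anisotropy of the fluctuations.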
Resumo:
Background: Head and neck squamous cell carcinoma (HNSCC) is one of the most common malignancies in humans. The average 5-year survival rate is one of the lowest among aggressive cancers, showing no significant improvement in recent years. When detected early, HNSCC has a good prognosis, but most patients present metastatic disease at the time of diagnosis, which significantly reduces survival rate. Despite extensive research, no molecular markers are currently available for diagnostic or prognostic purposes. Methods: Aiming to identify differentially expressed genes involved in laryngeal squamous cell carcinoma (LSCC) development and progression, we generated individual Serial Analysis of Gene Expression (SAGE) libraries from a metastatic and a non-metastatic larynx carcinoma, as well as from a normal larynx mucosa sample. Approximately 54,000 unique tags were sequenced in the three libraries. Results: Statistical data analysis identified a subset of 1,216 differentially expressed tags between tumor and normal libraries, and 894 differentially expressed tags between metastatic and non-metastatic carcinomas. Three genes displaying differential regulation, one down-regulated (KRT31) and two up-regulated (BST2, MFAP2), as well as one with a non-significant differential expression pattern (GNA15) in our SAGE data, were selected for real-time polymerase chain reaction (PCR) in a set of HNSCC samples. Consistent with our statistical analysis, quantitative PCR confirmed the upregulation of BST2 and MFAP2 and the downregulation of KRT31 when samples of HNSCC were compared to tumor-free surgical margins. As expected, GNA15 presented a non-significant differential expression pattern when tumor samples were compared to normal tissues. Conclusion: To the best of our knowledge, this is the first study reporting SAGE data in head and neck squamous cell tumors. Statistical analysis was effective in identifying differentially expressed genes reportedly involved in cancer development.
The differential expression of a subset of genes was confirmed in additional larynx carcinoma samples and in carcinomas from a distinct head and neck subsite. This result suggests the existence of potential common biomarkers for prognosis and targeted-therapy development in this heterogeneous type of tumor.
Resumo:
We show that the one-loop effective action at finite temperature for a scalar field with quartic interaction has the same renormalized expression as at zero temperature if written in terms of a certain classical field φ_c, and if we trade free propagators at zero temperature for their finite-temperature counterparts. The result follows if we write the partition function as an integral over field eigenstates (boundary fields) of the density matrix element in the functional Schrödinger field representation, and perform a semiclassical expansion in two steps: first, we integrate around the saddle point for fixed boundary fields, which is the classical field φ_c, a functional of the boundary fields; then, we perform a saddle-point integration over the boundary fields, whose correlations characterize the thermal properties of the system. This procedure provides a dimensionally reduced effective theory for the thermal system. We calculate the two-point correlation as an example.
Resumo:
We propose a statistical model to account for the gel-fluid anomalous phase transitions in charged bilayer- or lamellae-forming ionic lipids. The model Hamiltonian comprises effective attractive interactions to describe neutral-lipid membranes as well as the effect of electrostatic repulsions of the discrete ionic charges on the lipid headgroups. The latter can be counterion dissociated (charged) or counterion associated (neutral), while the lipid acyl chains may be in gel (low-temperature or high-lateral-pressure) or fluid (high-temperature or low-lateral-pressure) states. The system is modeled as a lattice gas with two distinct particle types, associated respectively with the polar-headgroup and acyl-chain states, which can be mapped onto an Ashkin-Teller model with the inclusion of cubic terms. The model displays a rich thermodynamic behavior in terms of the chemical potential of counterions (related to added salt concentration) and lateral pressure. In particular, we show the existence of semidissociated thermodynamic phases related to the onset of charge order in the system. This type of order stems from spatially ordered counterion association to the lipid headgroups, in which charged and neutral lipids alternate in a checkerboard-like order. Within the mean-field approximation, we predict that the acyl-chain order-disorder transition is discontinuous, with the first-order line ending at a critical point, as in the neutral case. Moreover, the charge order gives rise to continuous transitions, with the associated second-order lines joining the aforementioned first-order line at critical end points. We explore the thermodynamic behavior of some physical quantities, like the specific heat at constant lateral pressure and the degree of ionization, associated with the fraction of charged lipid headgroups.
Resumo:
We consider a simple Maier-Saupe statistical model with the inclusion of disorder degrees of freedom to mimic the phase diagram of a mixture of rodlike and disklike molecules. A quenched distribution of shapes leads to a phase diagram with two uniaxial and a biaxial nematic structure. A thermalized distribution, however, which is more adequate to liquid mixtures, precludes the stability of this biaxial phase. We then use a two-temperature formalism, and assume a separation of relaxation times, to show that a partial degree of annealing is already sufficient to stabilize a biaxial nematic structure.
Resumo:
The solvent effects on the low-lying absorption spectrum and on the ¹⁵N chemical shielding of pyrimidine in water are calculated using combined and sequential Monte Carlo simulation and quantum mechanical calculations. Special attention is devoted to the solute polarization. This is included by a previously developed iterative procedure in which the solute is electrostatically equilibrated with the solvent. In addition, we verify the simple yet unexplored alternative of combining the polarizable continuum model (PCM) and the hybrid QM/MM method: we use PCM to obtain the average solute polarization and include this in the MM part of the sequential QM/MM methodology, PCM-MM/QM. These procedures are compared and further used in the discrete and the explicit solvent models. The use of the PCM polarization implemented in the MM part seems to generate a very good description of the average solute polarization, leading to very good results for the n-π* excitation energy and the ¹⁵N nuclear chemical shielding of pyrimidine in aqueous environment. The best results obtained here, using the solute pyrimidine surrounded by 28 explicit water molecules embedded in the electrostatic field of the remaining 472 molecules, give statistically converged values for the low-lying n-π* absorption transition in water of 36,900 ± 100 (PCM polarization) and 36,950 ± 100 cm⁻¹ (iterative polarization), in excellent agreement with one another and with the experimental band maximum observed at 36,900 cm⁻¹. For the ¹⁵N nuclear shielding, the corresponding gas-to-water chemical shifts, obtained using the solute pyrimidine surrounded by 9 explicit water molecules embedded in the electrostatic field of the remaining 491 molecules, give statistically converged values of 24.4 ± 0.8 and 28.5 ± 0.8 ppm, compared with the inferred experimental value of 19 ± 2 ppm.
Given the simplicity of the PCM relative to the iterative polarization, this is an important aspect, and the computational savings point to the possibility of dealing with larger solute molecules. This PCM-MM/QM approach reconciles the simplicity of the PCM model with the reliability of the combined QM/MM approaches.
Resumo:
Efficient automatic protein classification is of central importance in genomic annotation. As an independent way to check the reliability of the classification, we propose a statistical approach to test if two sets of protein domain sequences coming from two families of the Pfam database are significantly different. We model protein sequences as realizations of Variable Length Markov Chains (VLMC) and we use the context trees as a signature of each protein family. Our approach is based on a Kolmogorov-Smirnov-type goodness-of-fit test proposed by Balding et al. [Limit theorems for sequences of random trees (2008), DOI: 10.1007/s11749-008-0092-z]. The test statistic is a supremum over the space of trees of a function of the two samples; its computation grows, in principle, exponentially fast with the maximal number of nodes of the potential trees. We show how to transform this problem into a max-flow over a related graph, which can be solved using a Ford-Fulkerson algorithm in time polynomial in that number. We apply the test to 10 randomly chosen protein domain families from the seed of the Pfam-A database (high quality, manually curated families). The test shows that the distributions of context trees coming from different families are significantly different. We emphasize that this is a novel mathematical approach to validate the automatic clustering of sequences in any context. We also study the performance of the test via simulations on Galton-Watson related processes.
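The max-flow reduction mentioned above can be solved with the BFS variant of Ford-Fulkerson (Edmonds-Karp). A minimal sketch on a toy network; the graph is illustrative and unrelated to the paper's context-tree construction:

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp: repeatedly augment along shortest residual paths."""
    n = len(capacity)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        # BFS for an augmenting path in the residual graph
        parent = [-1] * n
        parent[source] = source
        queue = deque([source])
        while queue and parent[sink] == -1:
            u = queue.popleft()
            for v in range(n):
                if parent[v] == -1 and capacity[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    queue.append(v)
        if parent[sink] == -1:
            return total  # no augmenting path left: flow is maximal
        # Find the bottleneck along the path, then push flow through it
        bottleneck, v = float("inf"), sink
        while v != source:
            u = parent[v]
            bottleneck = min(bottleneck, capacity[u][v] - flow[u][v])
            v = u
        v = sink
        while v != source:
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
            v = u
        total += bottleneck

# Toy network: 0 = source, 3 = sink; entries are edge capacities
cap = [[0, 3, 2, 0],
       [0, 0, 1, 2],
       [0, 0, 0, 2],
       [0, 0, 0, 0]]
print(max_flow(cap, 0, 3))  # → 4 (the min cut is the two edges into the sink)
```

With BFS the number of augmentations is polynomial in the graph size, which is what makes the test statistic computable in polynomial time in the number of potential tree nodes.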
Resumo:
This article intends to contribute to the reflection on Educational Statistics as a source for research on the History of Education. The main concern was to reveal how the Educational Statistics for the period from 1871 to 1931 were produced by the central government. Official reports from the General Statistics Directory and statistical yearbooks released by that department were analyzed, seeking the recommendations and definitions that guided the work. By problematizing the documentary issues surrounding Educational Statistics and their usual interpretations, the intention was to reduce the ignorance about the origin of school numbers, which are occasionally used in current research without the appropriate critical examination.
Resumo:
This study presents the results for a mature landfill leachate treated by a homogeneous catalytic ozonation process with Fe²⁺ and Fe³⁺ ions at acidic pH. Quality assessments were performed using Taguchi's method (L8 design). Strong synergism was observed statistically between molecular ozone and ferric ions, pointing to their catalytic effect on ·OH generation. Achieving better organic matter depollution rates requires an ozone flow of 5 L h⁻¹ (590 mg h⁻¹ O₃) and a ferric ion concentration of 5 mg L⁻¹.
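An L8 design, as used above, is a two-level orthogonal array with eight runs for up to seven factors. A sketch that builds the array from three basis columns via XOR and verifies its orthogonality (a standard construction, not the study's software):

```python
from itertools import product

# Build an L8(2^7) orthogonal array: 3 basis columns a, b, c and their XORs
rows = []
for a, b, c in product((0, 1), repeat=3):
    rows.append([a, b, a ^ b, c, a ^ c, b ^ c, a ^ b ^ c])

# Orthogonality: every pair of columns contains each of the four level
# combinations exactly twice
for i in range(7):
    for j in range(i + 1, 7):
        pairs = [(r[i], r[j]) for r in rows]
        assert all(pairs.count(p) == 2 for p in product((0, 1), repeat=2))

for r in rows:
    print(r)
```

This balance is what lets a Taguchi analysis estimate each factor's main effect from only eight runs instead of the 2⁷ = 128 of a full factorial.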
Resumo:
This work presents a statistical study of the variability of the mechanical properties of hardened self-compacting concrete, including the compressive strength, splitting tensile strength and modulus of elasticity. The comparison of the experimental results with those derived from several codes and recommendations allows evaluating whether the hardened behaviour of self-compacting concrete can be appropriately predicted by the existing formulations. The variables analyzed include the maximum aggregate size and the paste and gravel content. The analyzed self-compacting concretes presented variability measures in the same range as expected for conventional vibrated concrete, with all the results within a 95% confidence level. From the several formulations for conventional concrete considered in this study, it was observed that a safe estimate of the modulus of elasticity can be obtained from the value of compressive strength, with lower-strength self-compacting concretes presenting higher safety margins. However, most codes overestimate the material's tensile strength.
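Code estimates of the modulus of elasticity from compressive strength, of the kind evaluated in the study, are simple power laws. A sketch of two commonly cited expressions; the coefficients below follow the ACI 318 and Eurocode 2 formulas as usually quoted, and should be verified against the current code editions before any design use:

```python
# Empirical code estimates of the modulus of elasticity from compressive
# strength. Coefficients follow the commonly cited ACI 318 and Eurocode 2
# expressions; treat them as assumptions to check against the code texts.

def e_aci(fc_mpa: float) -> float:
    """ACI 318 estimate: Ec = 4700 * sqrt(f'c), both in MPa."""
    return 4700.0 * fc_mpa ** 0.5

def e_ec2(fcm_mpa: float) -> float:
    """Eurocode 2 estimate: Ecm = 22 * (fcm / 10) ** 0.3, result in GPa."""
    return 22.0 * (fcm_mpa / 10.0) ** 0.3

fc = 30.0       # characteristic cylinder strength, MPa (illustrative value)
fcm = fc + 8.0  # EC2 convention for the mean strength
print(f"ACI 318: {e_aci(fc) / 1000:.1f} GPa, EC2: {e_ec2(fcm):.1f} GPa")
```

Both give a modulus around 25-33 GPa for a 30 MPa concrete, illustrating why a strength-based estimate can be conservative for lower-strength self-compacting mixes.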