966 results for Maximum entropy methods
Abstract:
OBJECTIVE: The aim of the present study was to determine the in vitro maximum inhibitory dilution (MID) of two chlorhexidine-based oral mouthwashes (CHX), Noplak® and Periogard®, and one polyhexamethylene biguanide-based mouthwash (PHMB), Sanifill Premium®, against 28 field Staphylococcus aureus strains using the agar dilution method. MATERIALS AND METHODS: For each product, decimal dilutions ranging from 1/10 to 1/655,360 were prepared in distilled water and added to Mueller Hinton Agar culture medium. After homogenization, the culture medium was poured onto Petri dishes. Strains were inoculated using a Steers multipoint inoculator and the dishes were incubated at 37°C for 24 hours. For reading, the MID was taken as the maximum dilution of the mouthwash still capable of inhibiting microbial growth. RESULTS: Sanifill Premium® inhibited the growth of all strains at the 1/40 dilution and of 1 strain at the 1/80 dilution. Noplak® inhibited the growth of 23 strains at the 1/640 dilution and of all 28 strains at the 1/320 dilution. Periogard® inhibited the growth of 7 strains at the 1/640 dilution and of all 28 strains at the 1/320 dilution. Data were submitted to the Kruskal-Wallis test, which showed significant differences among the mouthwashes evaluated (p<0.05). No significant difference was found between Noplak® and Periogard® (p>0.05). Sanifill Premium® was the least effective (p<0.05). CONCLUSION: CHX-based mouthwashes present better antimicrobial activity against S. aureus than the PHMB-based mouthwash.
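As a hedged illustration of the non-parametric comparison described above, the sketch below runs a Kruskal-Wallis test on made-up reciprocal MID values; the per-strain data are not given in the abstract.

```python
# Minimal sketch: Kruskal-Wallis comparison of hypothetical MID values.
from scipy.stats import kruskal

# Hypothetical reciprocal MID values (1/dilution) for a few strains per product.
noplak = [640, 640, 320, 640, 640]
periogard = [320, 640, 320, 320, 640]
sanifill = [40, 40, 40, 80, 40]

h_stat, p_value = kruskal(noplak, periogard, sanifill)
print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("At least one mouthwash differs in MID distribution.")
```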
Abstract:
This study compared the mandibular displacement produced by three methods of centric relation record using an anterior jig associated with (A) chin point guidance, (B) swallowing (control group) and (C) bimanual manipulation. Ten patients aged 25-39 years were selected according to the following inclusion criteria: complete dentition (up to the second molars), Angle Class I, absence of signs and symptoms of temporomandibular disorders, and diagnostic casts showing stability in the maximum intercuspation (MI) position. Impressions of the maxillary and mandibular arches were made with an irreversible hydrocolloid impression material. Master casts of each patient were obtained and mounted on a microscope table with MI as the reference position, and 5 records of each method were made per patient. The mandibular casts were then repositioned with the records interposed and new measurements were obtained. The difference between the two readings gave the displacement of the mandible on the anteroposterior and lateral axes. Data were analyzed statistically by ANOVA and Tukey's test at the 5% significance level. There were no statistically significant differences (p>0.05) among the three methods for lateral displacement (A=0.38 ± 0.26, B=0.32 ± 0.25 and C=0.32 ± 0.23). For the anteroposterior displacement (A=2.76 ± 1.43, B=2.46 ± 1.48 and C=2.97 ± 1.51), the swallowing method (B) differed significantly from the others (p<0.05), but no significant difference (p>0.05) was found between chin point guidance (A) and bimanual manipulation (C). In conclusion, the swallowing method produced a smaller posterior displacement of the mandible than the other methods.
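The sketch below illustrates, under stated assumptions, the ANOVA-plus-Tukey workflow mentioned above; the displacement values are simulated from the reported means and standard deviations, not the study's raw records.

```python
# Hedged illustration: one-way ANOVA followed by Tukey HSD on simulated data.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
chin_point = rng.normal(2.76, 1.43, 50)   # method A (simulated)
swallowing = rng.normal(2.46, 1.48, 50)   # method B (simulated)
bimanual = rng.normal(2.97, 1.51, 50)     # method C (simulated)

f_stat, p_value = f_oneway(chin_point, swallowing, bimanual)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

values = np.concatenate([chin_point, swallowing, bimanual])
groups = ["A"] * 50 + ["B"] * 50 + ["C"] * 50
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```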
Abstract:
Objective: To measure condylar displacement between centric relation (CR) and maximum intercuspation (MIC) in symptomatic and asymptomatic subjects. Materials and Methods: The sample comprised 70 non-deprogrammed individuals, divided equally into two groups, one symptomatic and the other asymptomatic, grouped according to the research diagnostic criteria for temporomandibular disorders (RDC/TMD). Condylar displacement was measured in three dimensions with the condylar position indicator (CPI) device. Dahlberg's index, intraclass correlation coefficient, repeated measures analysis of variance, analysis of variance, and generalized estimating equations were used for statistical analysis. Results: A greater magnitude of difference was observed on the vertical plane on the left side in both symptomatic and asymptomatic individuals (P = .033). The symptomatic group presented higher measurements on the transverse plane (P = .015). The percentage of displacement in the mesial direction was significantly higher in the asymptomatic group than in the symptomatic one (P = .049). Both groups presented a significantly higher percentage of mesial direction on the right side than on the left (P = .036). The presence of bilateral condylar displacement (left and right sides) in an inferior and distal direction was significantly greater in symptomatic individuals (P = .012). However, no statistical difference was noted between genders. Conclusion: Statistically significant differences between CR and MIC were quantifiable at the condylar level in asymptomatic and symptomatic individuals. (Angle Orthod. 2010;80:835-842.)
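Dahlberg's index, cited above for method error, has a simple closed form; the sketch below computes it for hypothetical duplicate readings (the study's measurements are not reproduced here).

```python
# Dahlberg's index for paired repeated measurements: sqrt( sum(d_i^2) / (2n) ).
import math

def dahlberg_index(first, second):
    """Method error between two repeated measurement series."""
    assert len(first) == len(second)
    squared_diffs = sum((a - b) ** 2 for a, b in zip(first, second))
    return math.sqrt(squared_diffs / (2 * len(first)))

# Hypothetical duplicate condylar-displacement readings (mm).
reading_1 = [0.42, 0.55, 0.31, 0.48, 0.60]
reading_2 = [0.40, 0.58, 0.33, 0.45, 0.57]
print(f"Dahlberg's index: {dahlberg_index(reading_1, reading_2):.3f} mm")
```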
Abstract:
Aerosol samples were collected at a pasture site in the Amazon Basin as part of the project LBA-SMOCC-2002 (Large-Scale Biosphere-Atmosphere Experiment in Amazonia - Smoke Aerosols, Clouds, Rainfall and Climate: Aerosols from Biomass Burning Perturb Global and Regional Climate). Sampling was conducted during the late dry season, when the aerosol composition was dominated by biomass burning emissions, especially in the submicron fraction. A 13-stage Dekati low-pressure impactor (DLPI) was used to collect particles with nominal aerodynamic diameters (Dp) ranging from 0.03 to 10 µm. Gravimetric analyses of the DLPI substrates and filters were performed to obtain aerosol mass concentrations. The concentrations of total, apparent elemental, and organic carbon (TC, ECa, and OC) were determined using thermal and thermal-optical analysis (TOA) methods. A light transmission method (LTM) was used to determine the concentration of equivalent black carbon (BCe), i.e., the absorbing fraction at 880 nm, for the size-resolved samples. During the dry period, due to the pervasive presence of fires in the region upwind of the sampling site, concentrations of fine aerosols (Dp < 2.5 µm: average 59.8 µg m⁻³) were higher than those of coarse aerosols (Dp > 2.5 µm: 4.1 µg m⁻³). Carbonaceous matter, estimated as the sum of the particulate organic matter (i.e., OC × 1.8) plus BCe, comprised more than 90% of the total aerosol mass. Concentrations of ECa (estimated by thermal analysis with a correction for charring) and BCe (estimated by LTM) averaged 5.2 ± 1.3 and 3.1 ± 0.8 µg m⁻³, respectively. The determination of EC was improved by extracting water-soluble organic material from the samples, which reduced the average light absorption Ångström exponent of particles in the size range of 0.1 to 1.0 µm from >2.0 to approximately 1.2. The size-resolved BCe measured by the LTM showed a clear maximum between 0.4 and 0.6 µm in diameter. The concentrations of OC and BCe varied diurnally during the dry period, and this variation is related to diurnal changes in boundary layer thickness and in fire frequency.
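As a minimal sketch of two calculations implied above, the code below computes an absorption Ångström exponent from absorption at two wavelengths and the carbonaceous-matter estimate POM ≈ 1.8 × OC plus BCe; all numerical inputs are illustrative, not the campaign's measurements.

```python
# Sketch: absorption Angstrom exponent and carbonaceous-matter estimate.
import math

def angstrom_exponent(b_abs_1, b_abs_2, wavelength_1, wavelength_2):
    """Absorption Angstrom exponent from absorption at two wavelengths."""
    return -math.log(b_abs_1 / b_abs_2) / math.log(wavelength_1 / wavelength_2)

# Illustrative absorption coefficients (Mm^-1) at 470 nm and 880 nm.
aae = angstrom_exponent(b_abs_1=28.0, b_abs_2=12.0,
                        wavelength_1=470.0, wavelength_2=880.0)
print(f"Absorption Angstrom exponent = {aae:.2f}")

# Carbonaceous-matter estimate used in the abstract: POM = 1.8 * OC, plus BCe.
oc, bce = 25.0, 3.1   # ug m^-3, illustrative values
print(f"Carbonaceous matter = {1.8 * oc + bce:.1f} ug m^-3")
```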
Abstract:
Background: The inference of gene regulatory networks (GRNs) from large-scale expression profiles is currently one of the most challenging problems of Systems Biology. Many techniques and models have been proposed for this task. However, it is generally not possible to recover the original topology with great accuracy, mainly because of the short time series available in the face of the high complexity of the networks and the intrinsic noise of the expression measurements. In order to improve the accuracy of GRN inference methods based on entropy (mutual information), a new criterion function is proposed here. Results: In this paper we introduce the use of the generalized entropy proposed by Tsallis for the inference of GRNs from time series expression profiles. The inference process is based on a feature selection approach, with the conditional entropy applied as the criterion function. In order to assess the proposed methodology, the algorithm is applied to recover the network topology from temporal expressions generated by an artificial gene network (AGN) model as well as from the DREAM challenge. The adopted AGN is based on theoretical models of complex networks, and its gene transfer functions are drawn at random from the set of possible Boolean functions, thus creating its dynamics. The DREAM time series data, on the other hand, vary in network size and their topologies are based on real networks; the dynamics are generated by continuous differential equations with noise and perturbation. By adopting both data sources, it is possible to estimate the average quality of the inference with respect to different network topologies, transfer functions and network sizes. Conclusions: A remarkable improvement in accuracy was observed in the experimental results: the non-Shannon entropy reduced the number of false connections in the inferred topology. The best value of the Tsallis entropy free parameter was on average in the range 2.5 ≤ q ≤ 3.5 (hence, subextensive entropy), which opens new perspectives for GRN inference methods based on information theory and for investigation of the nonextensivity of such networks. The inference algorithm and criterion function proposed here were implemented and included in the DimReduction software, which is freely available at http://sourceforge.net/projects/dimreduction and http://code.google.com/p/dimreduction/.
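A small sketch of the Tsallis entropy used as a criterion function is given below; the conditional form shown (a probability-weighted average of within-class Tsallis entropies) is one common convention, and the joint distribution is invented for illustration.

```python
# Sketch: Tsallis entropy and a conditional version usable as a criterion function.
import numpy as np

def tsallis_entropy(p, q=2.5):
    """S_q = (1 - sum_i p_i^q) / (q - 1); tends to the Shannon entropy as q -> 1."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if np.isclose(q, 1.0):
        return float(-(p * np.log(p)).sum())
    return float((1.0 - (p ** q).sum()) / (q - 1.0))

def conditional_tsallis_entropy(joint, q=2.5):
    """H_q(Y | X) = sum_x P(x) * S_q(Y | X = x), for a joint P(x, y) table."""
    joint = np.asarray(joint, dtype=float)
    p_x = joint.sum(axis=1)
    h = 0.0
    for x, px in enumerate(p_x):
        if px > 0:
            h += px * tsallis_entropy(joint[x] / px, q)
    return h

# Illustrative joint distribution P(predictor state, target gene state).
joint = np.array([[0.40, 0.10],
                  [0.05, 0.45]])
print(f"H_q(Y|X) = {conditional_tsallis_entropy(joint, q=2.5):.3f}")
```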
Abstract:
The ability to predict leaf area and leaf area index is crucial in crop simulation models that predict crop growth and yield. Previous studies have shown existing methods of predicting leaf area to be inadequate when applied to a broad range of cultivars with different numbers of leaves. The objectives of the study were to (i) develop generalised methods of modelling individual and total plant leaf area, and leaf senescence, that do not require constants that are specific to environments and/or genotypes, (ii) re-examine the base, optimum, and maximum temperatures for calculation of thermal time for leaf senescence, and (iii) assess the method of calculation of individual leaf area from leaf length and leaf width in experimental work. Five cultivars of maize differing widely in maturity and adaptation were planted in October 1994 in south-eastern Queensland, and grown under non-limiting conditions of water and plant nutrient supplies. Additional data for maize plants with low total leaf number (12-17) grown at Katumani Research Centre, Kenya, were included to extend the range in the total leaf number per plant. The equation for the modified (slightly skewed) bell curve could be generalised for modelling individual leaf area, as all coefficients in it were related to total leaf number. Use of coefficients for individual genotypes can be avoided, and individual and total plant leaf area can be calculated from total leaf number. A single, logistic equation, relying on maximum plant leaf area and thermal time from emergence, was developed to predict leaf senescence. The base, optimum, and maximum temperatures for calculation of thermal time for leaf senescence were 8, 34, and 40 degrees C, and apply for the whole crop-cycle when used in modelling of leaf senescence. Thus, the modelling of leaf production and senescence is simplified, improved, and generalised. Consequently, the modelling of leaf area index (LAI) and variables that rely on LAI will be improved. For experimental purposes, we found that the calculation of leaf area from leaf length and leaf width remains appropriate, though the relationship differed slightly from previously published equations.
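The sketch below illustrates, under assumed functional forms, two ingredients described above: thermal-time accumulation with the reported 8/34/40 °C cardinal temperatures, and a generic logistic senescence curve. The broken-linear temperature response and the logistic coefficients are assumptions for illustration, not the paper's fitted equations.

```python
# Sketch: thermal-time accumulation and a logistic leaf-senescence curve.
import math

def daily_thermal_time(t_mean, t_base=8.0, t_opt=34.0, t_max=40.0):
    """Degree-days for one day from mean temperature (assumed broken-linear response)."""
    if t_mean <= t_base or t_mean >= t_max:
        return 0.0
    if t_mean <= t_opt:
        return t_mean - t_base
    # Linear decline from the optimum to the maximum temperature.
    return (t_opt - t_base) * (t_max - t_mean) / (t_max - t_opt)

def senesced_fraction(thermal_time, tt_half=900.0, slope=0.01):
    """Logistic fraction of maximum plant leaf area that has senesced (coefficients assumed)."""
    return 1.0 / (1.0 + math.exp(-slope * (thermal_time - tt_half)))

tt = sum(daily_thermal_time(t) for t in [22, 25, 30, 36, 18] * 20)  # ~100 days
print(f"Accumulated thermal time: {tt:.0f} degree-days")
print(f"Senesced fraction of maximum leaf area: {senesced_fraction(tt):.2f}")
```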
Abstract:
Functional magnetic resonance imaging (FMRI) analysis methods can quite generally be divided into hypothesis-driven and data-driven approaches. The former are utilised in the majority of FMRI studies, where a specific haemodynamic response is modelled using knowledge of event timing during the scan and is tested against the data with a t test or a correlation analysis. These approaches often lack the flexibility to account for variability in the haemodynamic response across subjects and brain regions, which is of specific interest in high-temporal-resolution event-related studies. Current data-driven approaches attempt to identify components of interest in the data, but do not currently utilise any physiological information to discriminate these components. Here we present a hypothesis-driven approach that is an extension of Friman's maximum correlation modelling method (NeuroImage 16, 454-464, 2002), specifically focused on discriminating the temporal characteristics of event-related haemodynamic activity. Test analyses, on both simulated and real event-related FMRI data, will be presented.
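As a simplified, hedged illustration of the hypothesis-driven idea (not Friman's full maximum correlation model), the sketch below convolves event timing with a double-gamma haemodynamic response and correlates the resulting regressor with a synthetic voxel time series.

```python
# Sketch: single-regressor correlation analysis on a synthetic voxel time series.
import numpy as np

tr, n_scans = 2.0, 120
t = np.arange(0, 30, tr)
# Illustrative double-gamma HRF (peak near 5 s, small undershoot near 15 s).
hrf = (t ** 5) * np.exp(-t) / 120.0 - 0.1 * (t ** 15) * np.exp(-t) / 1.3e12
hrf /= hrf.max()

events = np.zeros(n_scans)
events[::20] = 1.0                               # one event every 40 s
regressor = np.convolve(events, hrf)[:n_scans]   # modelled haemodynamic response

rng = np.random.default_rng(1)
voxel = 0.8 * regressor + rng.normal(0, 0.5, n_scans)  # synthetic voxel signal

r = np.corrcoef(regressor, voxel)[0, 1]
print(f"Correlation between model and voxel time series: r = {r:.2f}")
```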
Abstract:
Computational models complement laboratory experimentation for efficient identification of MHC-binding peptides and T-cell epitopes. Methods for prediction of MHC-binding peptides include binding motifs, quantitative matrices, artificial neural networks, hidden Markov models, and molecular modelling. Models derived by these methods have been successfully used for prediction of T-cell epitopes in cancer, autoimmunity, infectious disease, and allergy. For maximum benefit, the use of computer models must be treated as experiments analogous to standard laboratory procedures and performed according to strict standards. This requires careful selection of data for model building, and adequate testing and validation. A range of web-based databases and MHC-binding prediction programs are available. Although some available prediction programs for particular MHC alleles have reasonable accuracy, there is no guarantee that all models produce good quality predictions. In this article, we present and discuss a framework for modelling, testing, and applications of computational methods used in predictions of T-cell epitopes. (C) 2004 Elsevier Inc. All rights reserved.
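A toy sketch of quantitative-matrix (position-specific scoring) prediction of MHC-binding 9-mers follows; the matrix values are randomly generated for illustration and do not correspond to any real MHC allele.

```python
# Sketch: scoring 9-mer peptides with a hypothetical position-specific scoring matrix.
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
rng = np.random.default_rng(42)
# Hypothetical log-odds matrix: one score per amino acid per peptide position.
pssm = rng.normal(0.0, 1.0, size=(9, len(AMINO_ACIDS)))

def score_peptide(peptide, matrix):
    """Sum the position-specific scores of a 9-mer; higher = better predicted binder."""
    assert len(peptide) == matrix.shape[0]
    return sum(matrix[i, AMINO_ACIDS.index(aa)] for i, aa in enumerate(peptide))

# Arbitrary 9-mer peptides to score.
candidates = ["SIINFEKLV", "GILGFVFTL", "AAAAAAAAA"]
for pep in candidates:
    print(f"{pep}: score = {score_peptide(pep, pssm):+.2f}")
```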
Abstract:
Purpose: Many methods exist in the literature for identifying the PEEP to set in ARDS patients following a lung recruitment maneuver (RM). We compared ten published parameters for setting PEEP following a RM. Methods: Lung injury was induced by bilateral lung lavage in 14 female Dorset sheep, yielding a PaO2 of 100-150 mmHg at FIO2 1.0 and PEEP 5 cmH2O. A quasi-static P-V curve was then performed using the supersyringe method; PEEP was set to 20 cmH2O and a RM was performed with pressure control ventilation (inspiratory pressure set to 40-50 cmH2O) until PaO2 + PaCO2 > 400 mmHg. Following the RM, a decremental PEEP trial was performed: PEEP was decreased in 1 cmH2O steps every 5 min until 15 cmH2O was reached. Parameters measured during the decremental PEEP trial were compared with parameters obtained from the P-V curve. Results: For setting PEEP, maximum dynamic tidal respiratory compliance, maximum PaO2, maximum PaO2 + PaCO2, and minimum shunt calculated during the decremental PEEP trial, and the lower Pflex and point of maximal compliance increase on the inflation limb of the P-V curve (Pmci,i), were statistically indistinguishable. The PEEP values obtained using the deflation upper Pflex and the point of maximal compliance decrease on the deflation limb were significantly higher, and those obtained using the true inflection point on the inflation limb and minimum PaCO2 were significantly lower, than the other variables. Conclusion: In this animal model of ARDS, dynamic tidal respiratory compliance, maximum PaO2, maximum PaO2 + PaCO2, minimum shunt, inflation lower Pflex and Pmci,i yield similar values for PEEP following a recruitment maneuver.
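As a hedged sketch of one decremental-PEEP criterion named above, the code below selects the PEEP at maximum dynamic compliance, using the common bedside approximation C_dyn = Vt / (Ppeak - PEEP) and hypothetical trial data.

```python
# Sketch: pick PEEP at maximum dynamic compliance from a decremental PEEP trial.
def dynamic_compliance(tidal_volume_ml, peak_pressure, peep):
    """C_dyn = Vt / (Ppeak - PEEP), in mL/cmH2O (a common bedside approximation)."""
    return tidal_volume_ml / (peak_pressure - peep)

# (PEEP cmH2O, peak inspiratory pressure cmH2O, tidal volume mL) per trial step - hypothetical.
trial = [(20, 38, 420), (19, 36, 430), (18, 35, 445), (17, 34, 450),
         (16, 34, 440), (15, 35, 425)]

best_peep, best_cdyn = max(
    ((peep, dynamic_compliance(vt, ppeak, peep)) for peep, ppeak, vt in trial),
    key=lambda pair: pair[1])
print(f"PEEP at maximum dynamic compliance: {best_peep} cmH2O "
      f"(C_dyn = {best_cdyn:.1f} mL/cmH2O)")
```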
Abstract:
Binning and truncation of data are common in data analysis and machine learning. This paper addresses the problem of fitting mixture densities to multivariate binned and truncated data. The EM approach proposed by McLachlan and Jones (Biometrics, 44: 2, 571-578, 1988) for the univariate case is generalized to multivariate measurements. The multivariate solution requires the evaluation of multidimensional integrals over each bin at each iteration of the EM procedure. Naive implementation of the procedure can lead to computationally inefficient results. To reduce the computational cost a number of straightforward numerical techniques are proposed. Results on simulated data indicate that the proposed methods can achieve significant computational gains with no loss in the accuracy of the final parameter estimates. Furthermore, experimental results suggest that with a sufficient number of bins and data points it is possible to estimate the true underlying density almost as well as if the data were not binned. The paper concludes with a brief description of an application of this approach to diagnosis of iron deficiency anemia, in the context of binned and truncated bivariate measurements of volume and hemoglobin concentration from an individual's red blood cells.
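A much-simplified sketch of EM on binned data follows: a two-component univariate Gaussian mixture is fitted to histogram counts, approximating each within-bin integral by the density at the bin midpoint. The paper's multivariate, truncated case requires proper multidimensional integrals over each bin instead.

```python
# Sketch: EM for a two-component Gaussian mixture on binned (histogram) counts.
import numpy as np
from scipy.stats import norm

edges = np.linspace(0, 10, 21)                  # 20 bins on [0, 10]
mids = 0.5 * (edges[:-1] + edges[1:])
rng = np.random.default_rng(3)
data = np.concatenate([rng.normal(3, 0.8, 600), rng.normal(7, 1.0, 400)])
counts, _ = np.histogram(data, edges)           # only the binned counts are "observed"

w, mu, sigma = np.array([0.5, 0.5]), np.array([2.0, 8.0]), np.array([1.0, 1.0])
for _ in range(200):
    # E-step: responsibilities of each component at each bin midpoint.
    dens = np.vstack([wk * norm.pdf(mids, mk, sk) for wk, mk, sk in zip(w, mu, sigma)])
    resp = dens / dens.sum(axis=0)
    # M-step: standard Gaussian-mixture updates weighted by the bin counts.
    nk = (resp * counts).sum(axis=1)
    w = nk / counts.sum()
    mu = (resp * counts * mids).sum(axis=1) / nk
    sigma = np.sqrt((resp * counts * (mids - mu[:, None]) ** 2).sum(axis=1) / nk)

print("weights:", np.round(w, 2), "means:", np.round(mu, 2), "sds:", np.round(sigma, 2))
```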
Abstract:
The theory of ecological stoichiometry considers ecological interactions among species with different chemical compositions. Both experimental and theoretical investigations have shown the importance of species composition for the outcome of the population dynamics. A recent study of a theoretical three-species food chain model considering stoichiometry [B. Deng and I. Loladze, Chaos 17, 033108 (2007)] shows that coexistence between two consumers predating on the same prey is possible via chaos. In this work we study the topological and dynamical measures of the chaotic attractors found in such a model under ecologically relevant parameters. Using the theory of symbolic dynamics, we first compute the topological entropy associated with the unimodal Poincaré return maps obtained by Deng and Loladze from a dimension reduction. With this measure we numerically prove chaotic competitive coexistence, characterized by positive topological entropy and positive Lyapunov exponents, achieved when the first predator reduces its maximum growth rate, as happens as δ1 increases. However, for higher values of δ1 the dynamics become stable again due to an asymmetric bubble-like bifurcation scenario. We also show that a decrease in the efficiency of the predator sensitive to prey quality (increasing the parameter ζ) stabilizes the dynamics. Finally, we estimate the fractal dimension of the chaotic attractors for this stoichiometric ecological model.
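The stoichiometric food-chain model itself is not reproduced here; as a stand-in, the sketch below estimates the largest Lyapunov exponent of a generic unimodal map (the logistic map), the same positivity test used to characterise chaos in the unimodal return maps mentioned above.

```python
# Sketch: Lyapunov exponent of a unimodal map as a numerical chaos indicator.
import math

def lyapunov_logistic(r, x0=0.4, n_transient=1000, n_iter=100_000):
    """Average of log|f'(x)| along the orbit of the logistic map f(x) = r x (1 - x)."""
    x = x0
    for _ in range(n_transient):          # discard transient behaviour
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(n_iter):
        x = r * x * (1.0 - x)
        total += math.log(abs(r * (1.0 - 2.0 * x)))
    return total / n_iter

for r in (3.5, 3.9):                      # periodic vs chaotic regime
    lam = lyapunov_logistic(r)
    label = "chaotic" if lam > 0 else "non-chaotic"
    print(f"r = {r}: lambda = {lam:+.3f}  ({label})")
```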
Abstract:
Deoxyribonucleic acid, or DNA, is the most fundamental aspect of life, yet present-day scientific knowledge has merely scratched the surface of the problem posed by its decoding. While experimental methods provide insightful clues, the adoption of analysis tools supported by the formalism of mathematics will lead to a systematic and solid build-up of knowledge. This paper studies human DNA from the perspective of system dynamics. By combining entropy and the Fourier transform, several global properties of the code are revealed. Fractional-order characteristics emerge as a natural consequence of the information content. These properties constitute a small piece of scientific knowledge that will support further efforts towards the final aim of establishing a comprehensive theory of the phenomena involved in life.
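A small sketch of the entropy-plus-Fourier viewpoint follows, applied to a random surrogate sequence; a real analysis would read human chromosome data from a FASTA file instead.

```python
# Sketch: composition entropy and power spectrum of a nucleotide indicator signal.
import numpy as np

rng = np.random.default_rng(7)
sequence = rng.choice(list("ACGT"), size=4096, p=[0.3, 0.2, 0.2, 0.3])  # surrogate DNA

# Shannon entropy of the nucleotide composition (bits per symbol).
_, counts = np.unique(sequence, return_counts=True)
p = counts / counts.sum()
entropy = -(p * np.log2(p)).sum()
print(f"Composition entropy: {entropy:.3f} bits/symbol")

# Power spectrum of a binary indicator signal (1 where the base is G or C);
# a slowly decaying spectrum is the kind of signature linked to fractional-order,
# long-memory behaviour.
indicator = np.isin(sequence, ["G", "C"]).astype(float)
spectrum = np.abs(np.fft.rfft(indicator - indicator.mean())) ** 2
print("First few power-spectrum values:", np.round(spectrum[1:6], 1))
```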
Abstract:
Dissertation presented to the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa in fulfilment of the requirements for the degree of Master in Environmental Engineering.
Abstract:
Submitted in partial fulfillment of the requirements for the degree of PhD in Mathematics, speciality of Statistics, at the Faculdade de Ciências e Tecnologia.
Abstract:
Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.