979 results for Moderation statistical analysis
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
The aim of this study was to assess the cleaning capacity of the ProTaper system using motor-driven or manual instrumentation. Materials and Methods: Ten mandibular molars were randomly separated into 2 groups (n = 5) according to the type of instrumentation performed, as follows: Group 1 - instrumentation with rotary nickel-titanium (Ni-Ti) files using the ProTaper Universal System (Dentsply/Maillefer); and Group 2 - instrumentation with Ni-Ti hand files using ProTaper Universal (Dentsply/Maillefer). Afterwards, the teeth were sectioned transversely and submitted to histotechnical processing to obtain histological sections for microscopic evaluation. The images were analyzed with the Corel Photo-Paint X5 program (Corel Corporation) using an integration grid superimposed on the image. Results: Statistical analysis (Mann-Whitney U test, P < 0.05) demonstrated that Group 1 presented higher cleaning capacity than Group 2. Conclusions: The rotary technique presented better cleaning results in the apical third of the root canal system than the manual technique.
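As a rough illustration, a Mann-Whitney U comparison of the kind reported above can be run in Python with scipy; the cleaning scores below are hypothetical (the abstract reports no raw data), chosen only to mimic two groups of five:

```python
# Illustrative Mann-Whitney U comparison of two small groups (n = 5 each),
# matching the study design. All score values are hypothetical.
from scipy.stats import mannwhitneyu

g1_rotary = [92.1, 88.4, 95.0, 90.3, 93.7]   # hypothetical % cleaned, Group 1
g2_manual = [78.5, 82.0, 75.9, 80.2, 77.4]   # hypothetical % cleaned, Group 2

# With samples this small and no ties, scipy computes the exact distribution
u_stat, p_value = mannwhitneyu(g1_rotary, g2_manual, alternative="two-sided")
print(f"U = {u_stat}, p = {p_value:.4f}")
```

With fully separated groups of five, the exact two-sided p-value is 2/252 ≈ 0.008, below the 0.05 threshold the study used.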
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
The occupational exposure limits of different risk factors for development of low back disorders (LBDs) have not yet been established. One of the main problems in setting such guidelines is the limited understanding of how different risk factors for LBDs interact in causing injury, since the nature and mechanism of these disorders are relatively unknown phenomena. Industrial ergonomists' role becomes further complicated because the potential risk factors that may contribute towards the onset of LBDs interact in a complex manner, which makes it difficult to discriminate in detail among the jobs that place workers at high or low risk of LBDs. The purpose of this paper was to develop a comparative study between predictions based on the neural network-based model proposed by Zurada, Karwowski & Marras (1997) and a linear discriminant analysis model, for making predictions about industrial jobs according to their potential risk of low back disorders due to workplace design. The results obtained through applying the discriminant analysis-based model proved that it is as effective as the neural network-based model. Moreover, the discriminant analysis-based model proved to be more advantageous regarding cost and time savings for future data gathering.
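A minimal sketch of the discriminant-analysis side of such a classification, using Fisher's two-class linear discriminant on purely synthetic "job feature" data (the feature names, values, and class structure are assumptions for illustration, not the study's dataset or the Zurada et al. model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical job-exposure features (e.g. lift rate, load moment, twist angle);
# class 0 = low-risk jobs, class 1 = high-risk jobs. Purely synthetic data.
low = rng.normal([2.0, 40.0, 10.0], [0.8, 10.0, 4.0], size=(60, 3))
high = rng.normal([5.0, 80.0, 25.0], [0.8, 10.0, 4.0], size=(60, 3))

X = np.vstack([low, high])
y = np.array([0] * 60 + [1] * 60)

# Fisher's linear discriminant: w = Sw^{-1} (m1 - m0),
# where Sw is the pooled within-class scatter matrix
m0, m1 = low.mean(axis=0), high.mean(axis=0)
Sw = np.cov(low, rowvar=False) * (len(low) - 1) + np.cov(high, rowvar=False) * (len(high) - 1)
w = np.linalg.solve(Sw, m1 - m0)

# For equal-sized classes, the grand mean of the projections is the
# midpoint between the two projected class means
threshold = (X @ w).mean()
pred = (X @ w > threshold).astype(int)
accuracy = (pred == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

On well-separated synthetic classes the linear rule classifies essentially perfectly; the paper's point is that on real LBD data such a linear model performed comparably to the neural network while being cheaper to build.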
Abstract:
Objective: To evaluate 16 patients of both sexes wearing a lower overdenture and an upper complete denture, by analysing the resonance frequency of the initial and late stability of the implants used to retain the overdenture under immediate loading. Background: Oral rehabilitation with implant-supported complete dentures has become increasingly common among specialists in the oral rehabilitation area. It is an alternative for obtaining retention and stability in treatments involving conventional complete dentures, where two implants are enough to retain the overdenture satisfactorily. Materials and methods: The Osstell™ Mentor device was used for the analysis in the initial period (primary stability) and 3 and 15 months after installation of the lower overdenture (secondary stability). The statistical analysis was performed with a repeated measures model (p < 0.01). Results: The implant stability quotients were observed to increase after 15 months of the rehabilitating treatment. Conclusion: The use of overdentures over two lower implants should become the treatment of choice for individuals with a fully edentulous mandible.
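A one-way repeated-measures analysis of the kind mentioned above can be sketched in plain numpy/scipy; the ISQ values below are hypothetical, generated only to mirror the 16-subject, three-timepoint design (baseline, 3 months, 15 months):

```python
import numpy as np
from scipy.stats import f as f_dist

rng = np.random.default_rng(5)

# Hypothetical implant stability quotients (ISQ): 16 subjects x 3 timepoints.
# Means rise over time, with a per-subject offset and measurement noise.
n_subj = 16
subject_effect = rng.normal(0, 4, size=(n_subj, 1))
isq = np.array([62.0, 66.0, 71.0]) + subject_effect + rng.normal(0, 2, (n_subj, 3))

# One-way repeated-measures ANOVA from sums of squares:
# total variation = time effect + subject effect + error
grand = isq.mean()
ss_time = n_subj * ((isq.mean(axis=0) - grand) ** 2).sum()
ss_subj = 3 * ((isq.mean(axis=1) - grand) ** 2).sum()
ss_total = ((isq - grand) ** 2).sum()
ss_error = ss_total - ss_time - ss_subj

df_time, df_error = 3 - 1, (n_subj - 1) * (3 - 1)
f_stat = (ss_time / df_time) / (ss_error / df_error)
p_value = f_dist.sf(f_stat, df_time, df_error)
print(f"F({df_time},{df_error}) = {f_stat:.2f}, p = {p_value:.5f}")
```

Removing the subject sum of squares from the error term is what distinguishes the repeated-measures design from an ordinary one-way ANOVA on the same table.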
Abstract:
This study investigated the immunodetection of PTCH in epithelial components of dental follicles associated with impacted third molars without radiographic signs of pathosis. One hundred and five specimens of dental follicles associated with impacted third molars with incomplete rhizogenesis (between Nolla's stages 6 and 9) were surgically removed from 56 patients. Epithelial cell proliferation was determined by using immunohistochemical labeling. Statistical analysis was performed using Fisher's exact test at a 5% level of significance. Of the 105 dental follicles collected, 3 were PTCH-positive. The specimens with squamous metaplasia and epithelial hyperplasia had higher rates of positivity for PTCH, as did those with active remnants of odontogenic epithelium. This study suggests that the odontogenic cells of the dental follicle might be proliferating during rhizogenesis, while the squamous metaplasia and hyperplasia of the epithelial lining and the proliferative odontogenic epithelial rests show the differentiation potential of dental follicles.
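The Fisher exact test used above operates on a 2x2 contingency table. The table below is hypothetical (the abstract does not break down the 3 PTCH-positive cases by histologic feature), constructed only to show the mechanics:

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 table: PTCH positivity by presence of squamous metaplasia
#                  PTCH+   PTCH-
# metaplasia         3       12
# no metaplasia      0       90
table = [[3, 12], [0, 90]]

odds_ratio, p = fisher_exact(table)   # exact test, suitable for sparse counts
print(f"p = {p:.4f}")
```

With counts this sparse (a zero cell and only 3 positives), the exact test is the appropriate choice over a chi-squared approximation, which is presumably why the authors used it.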
Abstract:
PURPOSE. The aim of the present study was to evaluate whether a smaller morse taper abutment has a negative effect on the fracture resistance of implant-abutment connections under oblique compressive loads compared to a conventional abutment. MATERIALS AND METHODS. Twenty morse taper conventional abutments (4.8 mm diameter) and smaller abutments (3.8 mm diameter) were tightened (20 Ncm) to their respective implants (3.5 x 11 mm) and, after a 10-minute interval, the implant/abutment assemblies were subjected to a static compressive test, performed in a universal testing machine at 1 mm/min displacement and 45-degree inclination. The maximum deformation force was determined. Data were statistically analyzed by Student's t-test. RESULTS. The maximum deformation force of the 4.8 mm and 3.8 mm abutments was approximately 95.33 kgf and 95.25 kgf, respectively, and no fractures were noted after the mechanical test. Statistical analysis demonstrated that the evaluated abutments were statistically similar (P=.230). CONCLUSION. Abutments measuring 3.8 mm in diameter (reduced) presented mechanical properties similar to those of 4.8 mm (conventional) abutments, enabling their clinical use as indicated. [J Adv Prosthodont 2012;4:158-61]
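A hedged sketch of the Student's t-test comparison: the force values below are hypothetical, chosen only to sit around the reported group means (95.33 and 95.25 kgf), so the non-significant outcome mirrors the study's conclusion without reproducing its data:

```python
from scipy.stats import ttest_ind

# Hypothetical maximum deformation forces (kgf), a few assemblies per group,
# centred on the means reported in the abstract
conventional_48 = [95.1, 96.0, 94.8, 95.9, 95.0]
reduced_38 = [95.3, 94.9, 95.6, 95.2, 95.4]

# Two-sample Student's t-test (equal variances assumed, scipy's default)
t_stat, p = ttest_ind(conventional_48, reduced_38)
print(f"t = {t_stat:.3f}, p = {p:.3f}")
```

A large p-value here, as in the study (P=.230), means the data give no evidence of a difference; strictly speaking it does not prove equivalence, which would require an equivalence-testing design.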
Abstract:
Objective: To carry out an anatomical study of the axis with the use of computed tomography (CT) in children aged two to ten years, measuring the lamina angle, lamina and pedicle length and thickness, and lateral mass length. Methods: Sixty-four CT scans were studied from patients aged 24 to 120 months, of both sexes and without any cervical anomaly. The measurements obtained were correlated with the age and sex of the patients. Statistical analysis was performed using Student's t-test. Results: Within the age range of 24-48 months, 5.5% of the laminae and 8.3% of the pedicles had thicknesses of less than 3.5 mm, which is the minimum thickness needed for insertion of the screw. Between 49 and 120 months, no lamina thickness was less than 3.5 mm, and 1.2% of the pedicle thicknesses were less than 3.5 mm. Neither age group had any lamina or pedicle lengths of less than 12 mm, or lateral mass lengths greater than 12 mm. Conclusion: The analysis of the data obtained demonstrates that in most cases it is possible to use a 3.5 mm pedicle screw in the laminae and pedicles of the axis in children. Level of Evidence: II, Development of diagnostic criteria in consecutive patients.
Abstract:
Increasing age is associated with a reduction in overall heart rate variability as well as changes in the complexity of physiologic dynamics. The aim of this study was to verify whether the alterations in autonomic modulation of heart rate caused by the aging process could be detected by Shannon entropy (SE), conditional entropy (CE) and symbolic analysis (SA). Complexity analysis was carried out in 44 healthy subjects divided into two groups: an old group (n = 23, 63 ± 3 years) and a young group (n = 21, 23 ± 2 years). SE, CE [complexity index (CI) and normalized CI (NCI)] and SA (0V, 1V, 2LV and 2ULV patterns) were analyzed over short heart period series (200 cardiac beats) derived from ECG recordings during 15 min of rest in a supine position. The sequences characterized by three heart periods with no significant variations (0V), and those with two significant unlike variations (2ULV), reflect changes in sympathetic and vagal modulation, respectively. The unpaired t-test (or Mann-Whitney rank sum test when appropriate) was used in the statistical analysis. In the aging process, the distributions of patterns (SE) remain similar to those of young subjects. However, the regularity is significantly different: the patterns are more repetitive in the old group (a decrease of CI and NCI). The proportions of the pattern types also differ: 0V is increased and 2LV and 2ULV are reduced in the old group. These differences indicate a marked change of autonomic regulation. CE and SA are feasible techniques for detecting alterations in the autonomic control of heart rate in the old group.
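The 0V/1V/2LV/2ULV scheme described above can be sketched as follows. This is a simplified implementation of Porta-style symbolic dynamics (quantize the RR series into six levels, slide a three-beat window, classify each pattern by its number and direction of variations), run on a synthetic RR series rather than the study's recordings:

```python
import numpy as np

def symbolic_fractions(rr, n_levels=6):
    """Porta-style symbolic analysis: quantize an RR series into n_levels bins,
    slide a 3-beat window, and classify patterns as 0V / 1V / 2LV / 2ULV."""
    lo, hi = rr.min(), rr.max()
    symbols = np.minimum(((rr - lo) / (hi - lo) * n_levels).astype(int),
                         n_levels - 1)
    counts = {"0V": 0, "1V": 0, "2LV": 0, "2ULV": 0}
    for a, b, c in zip(symbols, symbols[1:], symbols[2:]):
        d1, d2 = b - a, c - b
        if d1 == 0 and d2 == 0:
            counts["0V"] += 1          # no variation
        elif d1 == 0 or d2 == 0:
            counts["1V"] += 1          # exactly one variation
        elif d1 * d2 > 0:
            counts["2LV"] += 1         # two like (same-direction) variations
        else:
            counts["2ULV"] += 1        # two unlike (opposite) variations
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

# Hypothetical 200-beat RR series (ms): baseline plus a respiratory-like
# oscillation and noise, matching the 200-beat windows used in the study
rng = np.random.default_rng(2)
rr = 850 + 40 * np.sin(np.arange(200) * 2 * np.pi / 4) + rng.normal(0, 10, 200)
print(symbolic_fractions(rr))
```

On a strictly monotone series the 2ULV fraction is zero by construction, since an unlike pattern requires a direction reversal; the study's finding is that 0V rises and 2LV/2ULV fall with age.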
Abstract:
Introduction: Neuroimaging has been widely used in studies to investigate depression in the elderly because it is a noninvasive technique and allows the detection of structural and functional brain alterations. Fractional anisotropy (FA) and mean diffusivity (MD) are neuroimaging indexes of the microstructural integrity of white matter, which are measured using diffusion tensor imaging (DTI). The aim of this study was to investigate differences in FA or MD in the entire brain without a previously determined region of interest (ROI) between depressed and non-depressed elderly patients. Method: Brain magnetic resonance imaging scans were obtained from 47 depressed elderly patients, diagnosed according to DSM-IV criteria, and 36 healthy elderly subjects as controls. Voxelwise statistical analysis of FA data was performed using tract-based spatial statistics (TBSS). Results: After controlling for age, no significant differences in FA or MD parameters were observed in the depressed elderly patients. No significant correlations were found between cognitive performance and FA or MD parameters. Conclusion: There were no significant differences in FA or MD values between mildly or moderately depressed and non-depressed elderly patients when the brain was analyzed without a previously determined ROI. (C) 2012 Elsevier Ltd. All rights reserved.
Abstract:
This work provides a step forward in the study and comprehension of the relationships between stochastic processes and a certain class of integro-partial differential equations, which can be used to model anomalous diffusion and transport in statistical physics. In the first part, we brought the reader through the fundamental notions of probability and stochastic processes, stochastic integration and stochastic differential equations. In particular, within the study of H-sssi processes, we focused on fractional Brownian motion (fBm) and its discrete-time increment process, the fractional Gaussian noise (fGn), which provide examples of non-Markovian Gaussian processes. The fGn, together with stationary FARIMA processes, is widely used in the modeling and estimation of long memory, or long-range dependence (LRD). Time series manifesting long-range dependence are often observed in nature, especially in physics, meteorology and climatology, but also in hydrology, geophysics, economics and many other fields. We studied LRD in depth, giving many real-data examples, providing statistical analysis and introducing parametric methods of estimation. Then, we introduced the theory of fractional integrals and derivatives, which indeed turns out to be very appropriate for studying and modeling systems with long-memory properties. After introducing the basic concepts, we provided many examples and applications. For instance, we investigated the relaxation equation with distributed-order time-fractional derivatives, which describes models characterized by a strong memory component and can be used to model relaxation in complex systems that deviates from the classical exponential Debye pattern. Then, we focused on the study of generalizations of the standard diffusion equation, passing through the preliminary study of the fractional forward drift equation. Such generalizations have been obtained by using fractional integrals and derivatives of distributed orders.
In order to find a connection between the anomalous diffusion described by these equations and long-range dependence, we introduced and studied the generalized grey Brownian motion (ggBm), which is actually a parametric class of H-sssi processes whose marginal probability density function evolves in time according to a partial integro-differential equation of fractional type. The ggBm is of course non-Markovian. Throughout the work, we have remarked many times that, starting from a master equation for a probability density function f(x,t), it is always possible to define an equivalence class of stochastic processes with the same marginal density function f(x,t). All these processes provide suitable stochastic models for the starting equation. Studying the ggBm, we focused on a subclass made up of processes with stationary increments. The ggBm has been defined canonically in the so-called grey noise space. However, we have been able to provide a characterization independent of the underlying probability space. We also pointed out that the generalized grey Brownian motion is a direct generalization of a Gaussian process; in particular, it generalizes Brownian motion and fractional Brownian motion as well. Finally, we introduced and analyzed a more general class of diffusion-type equations related to certain non-Markovian stochastic processes. We started from the forward drift equation, which has been made non-local in time by the introduction of a suitably chosen memory kernel K(t). The resulting non-Markovian equation has been interpreted in a natural way as the evolution equation of the marginal density function of a random time process l(t). We then considered the subordinated process Y(t)=X(l(t)), where X(t) is a Markovian diffusion. The corresponding time evolution of the marginal density function of Y(t) is governed by a non-Markovian Fokker-Planck equation which involves the same memory kernel K(t).
We developed several applications and derived the exact solutions. Moreover, we considered different stochastic models for the given equations, providing path simulations.
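One of the standard LRD estimation ideas touched on above, the aggregated-variance method for the Hurst exponent, can be sketched as follows. For a stationary increment process with Hurst exponent H, the variance of block means over blocks of size m scales as m^(2H-2), so H can be read off a log-log regression. The sketch is validated here only on white noise, where H = 0.5 is known; generating genuine fGn with H ≠ 0.5 is omitted:

```python
import numpy as np

def hurst_aggvar(x, block_sizes=(2, 4, 8, 16, 32, 64)):
    """Aggregated-variance Hurst estimate: for stationary increments,
    Var[block mean of size m] ~ m^(2H - 2); fit the slope on a log-log plot."""
    log_m, log_v = [], []
    for m in block_sizes:
        n_blocks = len(x) // m
        means = x[: n_blocks * m].reshape(n_blocks, m).mean(axis=1)
        log_m.append(np.log(m))
        log_v.append(np.log(means.var()))
    slope, _ = np.polyfit(log_m, log_v, 1)
    return 1 + slope / 2          # slope = 2H - 2

# White Gaussian noise has no memory, so the estimate should be close to 0.5
rng = np.random.default_rng(3)
h_white = hurst_aggvar(rng.normal(size=20000))
print(f"H (white noise) ~ {h_white:.2f}")
```

For iid noise the block-mean variance is exactly sigma^2/m, giving slope -1 and H = 0.5 up to sampling error; persistent LRD series would yield H > 0.5.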
Abstract:
In the biological world, the life of cells is guaranteed by their ability to sense and respond to a large variety of internal and external stimuli. In particular, excitable cells, like muscle or nerve cells, produce quick depolarizations in response to electrical, mechanical or chemical stimuli: this means that they can change their internal potential through a quick exchange of ions between the cytoplasm and the external environment. This is possible thanks to the presence of ion channels, proteins that span the lipid bilayer and act like switches, allowing ionic current to flow by opening and shutting in a stochastic way. For a particular class of ion channels, ligand-gated ion channels, the gating process is strongly influenced by binding between receptive sites located on the channel surface and specific target molecules. These channels, inserted in biomimetic membranes and in the presence of a proper electronic system for acquiring and processing the electrical signal, could make it possible to detect and quantify concentrations of specific molecules in complex mixtures from the ionic currents across the membrane; in this thesis work, this possibility is investigated. In particular, the thesis describes experiments focused on the creation and characterization of artificial lipid membranes, the reconstitution of ion channels, and the analysis of their electrical and statistical properties. Moreover, after a chapter on the basics of modelling the kinetic behaviour of ligand-gated ion channels, a possible approach for the estimation of the target molecule concentration, based on a statistical analysis of the ion channel open probability, is proposed. The fifth chapter describes the kinetic characterisation of a ligand-gated ion channel, the homomeric α2 isoform of the glycine receptor, involving both experimental acquisitions and signal analysis.
The last chapter presents the conclusions of this thesis, with some remarks on the effective performance that may be achieved using ligand-gated ion channels as sensing elements.
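The open-probability idea behind the proposed concentration estimator can be illustrated with a toy two-state (closed/open) channel whose opening rate is proportional to ligand concentration. The rate constants and time step here are arbitrary assumptions for illustration, not the glycine receptor's measured kinetics:

```python
import numpy as np

def simulate_open_fraction(conc, k_open_per_conc=2.0, k_close=1.0,
                           dt=1e-3, n_steps=200_000, seed=4):
    """Fixed-step simulation of a two-state channel:
    closed -> open at rate k_open_per_conc * conc, open -> closed at k_close.
    Returns the fraction of time spent open (an estimate of Po)."""
    rng = np.random.default_rng(seed)
    p_open_step = k_open_per_conc * conc * dt    # P(open | closed) per step
    p_close_step = k_close * dt                  # P(close | open) per step
    u = rng.random(n_steps)
    state, open_steps = 0, 0
    for i in range(n_steps):
        if state == 0 and u[i] < p_open_step:
            state = 1
        elif state == 1 and u[i] < p_close_step:
            state = 0
        open_steps += state
    return open_steps / n_steps

# Equilibrium prediction: Po = k_open / (k_open + k_close), so measuring Po
# lets one invert for the (here, hypothetical) ligand concentration
for conc in (0.25, 1.0, 4.0):
    po_theory = 2.0 * conc / (2.0 * conc + 1.0)
    print(f"conc={conc}: simulated Po={simulate_open_fraction(conc):.3f}, "
          f"theory={po_theory:.3f}")
```

Inverting the equilibrium relation, conc = Po * k_close / (k_open_per_conc * (1 - Po)), is the statistical route from an observed open probability back to a concentration estimate, which is the spirit of the approach proposed in the thesis.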
Abstract:
Perfusion CT imaging of the liver has the potential to improve the evaluation of tumour angiogenesis. Quantitative parameters can be obtained by applying mathematical models to the Time Attenuation Curve (TAC). However, there are still some difficulties in accurately quantifying perfusion parameters, due, for example, to the algorithms employed, the mathematical model, the patient's weight and cardiac output, and the acquisition system. In this thesis, new parameters and alternative methodologies for liver perfusion CT are presented in order to investigate the causes of variability of this technique. First, analyses were made to assess the variability related to the mathematical model used to compute arterial Blood Flow (BFa) values. Results were obtained by implementing algorithms based on the "maximum slope method" and the "dual-input one-compartment model". Statistical analysis on simulated data demonstrated that the two methods are not interchangeable, although the slope method is always applicable in a clinical context. Then, the variability related to TAC processing in the application of the slope method was analyzed. Comparison with manual selection allowed identification of the best automatic algorithm to compute BFa. The consistency of a Standardized Perfusion Value (SPV) was evaluated and a simplified calibration procedure was proposed. Finally, the quantitative value of the perfusion map was analyzed. The ROI approach and the map approach provide related values of BFa, which means that the pixel-by-pixel algorithm gives reliable quantitative results; also in the pixel-by-pixel approach, the slope method gives better results. In conclusion, the development of new automatic algorithms for a consistent computation of BFa, and the analysis and definition of a simplified technique to compute the SPV parameter, represent an improvement in the field of liver perfusion CT analysis.
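The maximum slope principle mentioned above can be sketched on synthetic curves. This is a schematic illustration of the method (arterial blood flow estimated as the peak tissue enhancement slope divided by the peak arterial enhancement), not the thesis's implementation; the TAC shapes and values are invented:

```python
import numpy as np

def bfa_max_slope(tissue_tac, arterial_tac, dt=1.0):
    """Maximum slope method: BFa = max d(tissue TAC)/dt / max(arterial TAC)."""
    max_slope = np.max(np.gradient(tissue_tac, dt))   # HU per second
    return max_slope / np.max(arterial_tac)           # per second

# Synthetic, idealized TACs (HU over 40 s, 1 s sampling), illustration only:
t = np.arange(40.0)
arterial = 200 * np.exp(-((t - 12) ** 2) / 30)        # sharp arterial peak
tissue = 30 / (1 + np.exp(-(t - 15) / 3))             # sigmoidal tissue uptake

bfa = bfa_max_slope(tissue, arterial)                 # fraction per second
print(f"BFa ~ {bfa * 6000:.1f} ml/min/100ml")         # x60 s/min x100 ml
```

The method's appeal in the clinic, as the thesis notes, is that it needs only the early part of the curves, before venous contamination, whereas compartment models require a longer, cleaner acquisition.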
Abstract:
This study evaluated farmers' communities' approach to the Slow Food vision, their perception of the Slow Food role in supporting their activity, and their appreciation of and expectations from participating in the Mother Earth event. The Unified Theory of Acceptance and Use of Technology (UTAUT) model was adopted in an agro-food sector context. A survey was conducted, collecting 120 questionnaires from farmers attending Mother Earth in Turin in 2010. The descriptive statistical analysis showed that both Slow Food membership and participation in the Mother Earth meeting were much appreciated for the support provided to the farmers' business and the contribution to a more sustainable and fair development. A positive social, environmental and psychological impact on farmers also resulted. The results also showed an interesting perspective on the possible universality of the Slow Food and Mother Earth values. Farmers declared that Slow Food supports them by preserving biodiversity, orienting them towards the use of local resources, and reducing chemical inputs. Many farmers mentioned language/culture and administrative/bureaucratic issues as obstacles to being a member of the movement and to participating in the event. Participation in Mother Earth gives an opportunity to exchange information with other farmers' communities and to take part in seminars and debates helpful for their business development. The absolute majority of positive answers associated with the farmers' willingness to relate to Slow Food and participate in the next Mother Earth editions negatively influenced the UTAUT model results. A factor analysis showed that the variables associated with the UTAUT constructs Performance Expectancy and Effort Expectancy were consistent, able to explain the construct variability, and reliably measured. Their inclusion in a simpler Technology Acceptance Model could be considered in future research.
Abstract:
The recent advent of next-generation sequencing technologies has revolutionized the way the genome is analyzed. This innovation makes it possible to obtain deeper information at a lower cost and in less time, and provides data that are discrete measurements. One of the most important applications of these data is differential analysis, that is, investigating whether a gene exhibits a different expression level across two (or more) biological conditions (such as disease states, treatments received and so on). As for the statistical analysis, the final aim is statistical testing, and for modeling these data the Negative Binomial distribution is considered the most adequate, especially because it allows for over-dispersion. However, the estimation of the dispersion parameter is a very delicate issue because little information is usually available for estimating it. Many strategies have been proposed, but they often result in procedures based on plug-in estimates, and in this thesis we show that this discrepancy between the estimation and the testing framework can lead to uncontrolled type I errors. We propose a mixture model that allows each gene to share information with other genes that exhibit similar variability. Afterwards, three consistent statistical tests are developed for differential expression analysis. We show that the proposed method improves the sensitivity of detecting differentially expressed genes with respect to common procedures, since it is the best at reaching the nominal value for the type I error while keeping elevated power. The method is finally illustrated on prostate cancer RNA-seq data.
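A simplified moment-based Negative Binomial fit illustrates the over-dispersion point above on hypothetical counts; this is a one-gene, method-of-moments sketch, not the thesis's mixture model or its consistent tests:

```python
import numpy as np
from scipy.stats import nbinom

# Hypothetical RNA-seq counts for one gene across replicates of one condition
counts = np.array([112, 87, 150, 95, 203, 76, 131, 118])

mu = counts.mean()
var = counts.var(ddof=1)
print(f"mean={mu:.1f}, variance={var:.1f}  (variance >> mean: over-dispersion)")

# Method-of-moments Negative Binomial fit with var = mu + alpha * mu^2;
# a Poisson model (var = mu) would badly understate this spread
alpha = (var - mu) / mu**2
p = mu / var                  # scipy parametrization: mean = n * (1 - p) / p
n = mu**2 / (var - mu)

# Tail probability of a large count under the fitted model
p_tail = nbinom.sf(249, n, p)
print(f"alpha={alpha:.3f}, P(count >= 250) = {p_tail:.4f}")
```

With only eight replicates the moment estimate of alpha is very noisy, which is exactly the "little information per gene" problem that motivates sharing dispersion information across similar genes, as the thesis proposes.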