926 results for Elementary Methods In Number Theory
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Objective: Sepsis is a common condition encountered in hospital environments. There is no effective treatment for sepsis, and it remains an important cause of death in intensive care units. This study aimed to discuss some methods that are available in clinics, and tests that have been recently developed for the diagnosis of sepsis. Methods: A systematic review was performed through the analysis of the following descriptors: sepsis, diagnostic methods, biological markers, and cytokines. Results: The deleterious effects of sepsis are caused by an imbalance between the invasiveness of the pathogen and the ability of the host to mount an effective immune response. Consequently, the host's immune surveillance fails to eliminate the pathogen, allowing it to spread. Moreover, there is a release of pro-inflammatory mediators and inappropriate activation of the coagulation and complement cascades, leading to dysfunction of multiple organs and systems. The difficulty in achieving total recovery of the patient is thus explainable. There is an increased incidence of sepsis worldwide due to factors such as an aging population, a larger number of surgeries, and a growing number of microorganisms resistant to existing antibiotics. Conclusion: The search for new diagnostic markers associated with increased risk of sepsis development, and for molecules that can be correlated with certain stages of sepsis, is becoming necessary. This would allow for earlier diagnosis, facilitate characterization of patient prognosis, and enable prediction of the possible evolution of each case. All other markers are, regrettably, still confined to research units.
Abstract:
In this thesis I described the theory and application of several computational methods for solving medicinal chemistry and biophysical tasks. I pointed out the valuable information that can be obtained by means of computer simulations and the possibility of predicting the outcome of traditional experiments. Nowadays, the computer represents an invaluable tool for chemists. In particular, the main topics of my research were the development of an automated docking protocol for blockers of the voltage-gated hERG potassium channel, and the investigation of the catalytic mechanism of the human peptidyl-prolyl cis-trans isomerase Pin1.
Abstract:
The Large Hadron Collider, located at the CERN laboratories in Geneva, is the largest particle accelerator in the world. One of the main research fields at the LHC is the study of the Higgs boson, the latest particle discovered at the ATLAS and CMS experiments. Due to the small production cross section of the Higgs boson, only substantial statistics can offer the chance to study the properties of this particle. In order to perform these searches it is desirable to avoid contamination of the signal signature by the number and variety of background processes produced in pp collisions at the LHC. Considerable importance is therefore given to the study of multivariate methods which, compared to the standard cut-based analysis, can enhance the selection of a Higgs boson produced in association with a top quark pair through a dileptonic final state (the ttH channel). The statistics collected up to 2012 are not sufficient to supply a significant number of ttH events; however, the methods applied in this thesis will provide a powerful tool for the increasing statistics that will be collected during the next LHC data taking.
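The gain from a multivariate selection over a cut on a single variable can be illustrated with a toy example. The sketch below is generic and hedged: the two observables, their distributions, and the Fisher linear discriminant are invented for illustration and are unrelated to the actual ttH analysis described above.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
# Toy "signal" and "background" samples in two observables (invented for illustration).
sig = rng.normal(loc=[1.0, 1.0], scale=1.0, size=(n, 2))
bkg = rng.normal(loc=[-1.0, -1.0], scale=1.0, size=(n, 2))

# Fisher linear discriminant: w = Sigma_within^{-1} (mu_sig - mu_bkg).
cov = 0.5 * (np.cov(sig, rowvar=False) + np.cov(bkg, rowvar=False))
w = np.linalg.solve(cov, sig.mean(axis=0) - bkg.mean(axis=0))

def accuracy(score_sig, score_bkg, threshold):
    # Fraction of events classified correctly when cutting at `threshold`.
    return 0.5 * ((score_sig > threshold).mean() + (score_bkg <= threshold).mean())

# Multivariate selection: cut on the Fisher score, which combines both observables.
acc_fisher = accuracy(sig @ w, bkg @ w, 0.0)
# Standard cut-based selection: cut on the first observable only.
acc_cut = accuracy(sig[:, 0], bkg[:, 0], 0.0)
print(acc_fisher, acc_cut)
```

Because the discriminant uses the full correlation structure, its score separates signal from background better than any single-variable cut on the same sample.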
Abstract:
The objective of this study was to estimate the potential of method restriction as a public health strategy in suicide prevention. Data from the Swiss Federal Statistical Office and the Swiss Institutes of Forensic Medicine from 2004 were gathered and categorized into suicide submethods according to accessibility to restriction of means. Of suicides in Switzerland, 39.2% are accessible to method restriction. The highest proportions were found in private weapons (13.2%), army weapons (10.4%), and jumps from hot-spots (4.6%). The presented method permits the estimation of the suicide prevention potential of a country by method restriction and the comparison of restriction potentials between suicide methods. In Switzerland, reduction of firearm suicides has the highest potential to reduce the total number of suicides.
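The estimation described above amounts to summing, over suicide submethods, the share of deaths attributable to submethods judged accessible to means restriction. A minimal sketch using only the three submethod shares quoted in the abstract (the remaining restrictable submethods are omitted, so the sum is a partial total, below the reported 39.2%):

```python
# Share of suicides (%) by submethod judged accessible to means restriction,
# as quoted in the abstract; other restrictable submethods are omitted here.
restrictable_shares = {
    "private weapons": 13.2,
    "army weapons": 10.4,
    "jumps from hot spots": 4.6,
}

# The restriction potential is the summed share of restrictable submethods.
partial_potential = sum(restrictable_shares.values())
print(round(partial_potential, 1))  # 28.2, a partial total below the 39.2% reported
```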
Abstract:
It has been proposed that inertial clustering may lead to an increased collision rate of water droplets in clouds. Atmospheric clouds and electrosprays contain electrically charged particles embedded in turbulent flows, often under the influence of an externally imposed, approximately uniform gravitational or electric force. In this thesis, we present the investigation of charged inertial particles embedded in turbulence. We have developed a theoretical description for the dynamics of such systems of charged, sedimenting particles in turbulence, allowing radial distribution functions to be predicted for both monodisperse and bidisperse particle size distributions. The governing parameters are the particle Stokes number (particle inertial time scale relative to turbulence dissipation time scale), the Coulomb-turbulence parameter (ratio of the Coulomb 'terminal' speed to the turbulence dissipation velocity scale), and the settling parameter (ratio of the gravitational terminal speed to the turbulence dissipation velocity scale). For monodisperse particles, the peak in the radial distribution function is well predicted by the balance between the particle terminal velocity under Coulomb repulsion and a time-averaged 'drift' velocity obtained from the nonuniform sampling of fluid strain and rotation due to finite particle inertia. The theory is compared to measured radial distribution functions for water particles in homogeneous, isotropic air turbulence. The radial distribution functions are obtained from particle positions measured in three dimensions using digital holography. The measurements support the general theoretical expression, consisting of a power-law increase in particle clustering due to particle response to dissipative turbulent eddies, modulated by an exponential electrostatic interaction term.
Both terms are modified as a result of the gravitational diffusion-like term, and the role of 'gravity' is explored by imposing a macroscopic uniform electric field to create an enhanced, effective gravity. The relation between the radial distribution functions and the inward mean radial relative velocity is established for charged particles.
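The functional form described above, a power-law clustering term modulated by an exponential electrostatic term, can be illustrated numerically. The sketch below is purely illustrative: the constants c0, c1, c2 are hypothetical placeholders, not fitted values from the thesis; the point is that the product of the two terms produces the interior peak in g(r) that the measurements indicate.

```python
import numpy as np

def rdf(r, c0=1.0, c1=0.6, c2=0.3):
    """Illustrative radial distribution function for charged inertial particles:
    a power-law clustering term (inertial response to dissipative eddies)
    modulated by an exponential electrostatic repulsion term.
    r is in units of the dissipation length; c0, c1, c2 are hypothetical constants."""
    return c0 * r ** (-c1) * np.exp(-c2 / r)

r = np.linspace(0.05, 10.0, 2000)
g = rdf(r)

# Without charge (c2 = 0), g grows monotonically as r -> 0; with charge, the
# exponential term suppresses close approach, so g peaks at intermediate r.
peak_r = r[np.argmax(g)]
print(peak_r)  # close to c2/c1 = 0.5 for this form
```

Setting the derivative of log g to zero gives the peak location r = c2/c1, the balance point between the two competing terms.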
Abstract:
Background. Similar to parent support in the home environment, teacher support at school may positively influence children's fruit and vegetable (FV) consumption. This study assessed the relationship between teacher support for FV consumption and the FV intake of 4th and 5th grade students in low-income elementary schools in central Texas. Methods. A secondary analysis was performed on baseline data collected from 496 parent-child dyads during the Marathon Kids study carried out by the Michael & Susan Dell Center for Healthy Living at the University of Texas School of Public Health. A hierarchical linear regression analysis adjusting for key demographic variables, parent support, and home FV availability was conducted. In addition, separate linear regression models stratified by quartiles of home FV availability were conducted to assess the relationship between teacher support and FV intake by level of home FV availability. Results. Teacher support was not significantly related to students' FV intake (p = .44). However, the interaction of teacher support and home FV availability was positively associated with students' FV consumption (p < .05). For students in the lowest quartile of home FV availability, teacher support accounted for approximately 6% of the FV intake variance (p = .02). For higher levels of FV availability, teacher support and FV intake were not related. Conclusions. For lower income elementary school-aged children with low FV availability at home, greater teacher support may lead to modest increases in FV consumption.
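The moderation analysis described above, an interaction between teacher support and home FV availability predicting FV intake, can be sketched with ordinary least squares. This is a generic illustration on synthetic, noiseless data, not the study's dataset or variable coding; the point is only how the interaction term enters the design matrix.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
teacher = rng.uniform(0, 5, n)   # hypothetical teacher-support score
home = rng.uniform(0, 5, n)      # hypothetical home FV availability score

# Synthetic "true" model: intake = 2 + 0.1*teacher + 0.5*home + 0.2*teacher*home
intake = 2.0 + 0.1 * teacher + 0.5 * home + 0.2 * teacher * home

# Design matrix with an intercept, both main effects, and their product term.
X = np.column_stack([np.ones(n), teacher, home, teacher * home])
beta, *_ = np.linalg.lstsq(X, intake, rcond=None)
print(beta)  # recovers [2.0, 0.1, 0.5, 0.2] (noiseless data)
```

A nonzero coefficient on the product column means the effect of teacher support changes with the level of home availability, which is exactly the moderation pattern the study reports.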
Abstract:
Complex diseases such as cancer result from multiple genetic changes and environmental exposures. Due to the rapid development of genotyping and sequencing technologies, we are now able to more accurately assess causal effects of many genetic and environmental factors. Genome-wide association studies have been able to localize many causal genetic variants predisposing to certain diseases. However, these studies only explain a small portion of the variation in the heritability of diseases. More advanced statistical models are urgently needed to identify and characterize additional genetic and environmental factors and their interactions, which will enable us to better understand the causes of complex diseases. In the past decade, thanks to increasing computational capabilities and novel statistical developments, Bayesian methods have been widely applied in genetics and genomics research, demonstrating superiority over standard approaches in certain research areas. Gene-environment and gene-gene interaction studies are among the areas where Bayesian methods can fully exert their advantages. This dissertation focuses on developing new Bayesian statistical methods for data analysis with complex gene-environment and gene-gene interactions, as well as extending some existing methods for gene-environment interactions to other related areas. It includes three sections: (1) Deriving the Bayesian variable selection framework for hierarchical gene-environment and gene-gene interactions; (2) Developing the Bayesian Natural and Orthogonal Interaction (NOIA) models for gene-environment interactions; and (3) Extending the applications of two Bayesian statistical methods, which were developed for gene-environment interaction studies, to other related types of studies such as adaptive borrowing of historical data.
We propose a Bayesian hierarchical mixture model framework that allows us to investigate genetic and environmental effects, gene-gene interactions (epistasis), and gene-environment interactions in the same model. It is well known that, in many practical situations, there exists a natural hierarchical structure between the main effects and interactions in the linear model. Here we propose a model that incorporates this hierarchical structure into the Bayesian mixture model, such that irrelevant interaction effects can be removed more efficiently, resulting in more robust, parsimonious, and powerful models. We evaluate both the 'strong hierarchical' and the 'weak hierarchical' models, which specify that both or at least one of the main effects of interacting factors, respectively, must be present for the interaction to be included in the model. Extensive simulation results show that the proposed strong and weak hierarchical mixture models control the proportion of false positive discoveries and yield a powerful approach to identifying the predisposing main effects and interactions in studies with complex gene-environment and gene-gene interactions. We also compare these two models with the 'independent' model, which does not impose the hierarchical constraint, and observe the superior performance of the hierarchical models in most of the considered situations. The proposed models are applied to real data analyses of gene-environment interactions in lung cancer and cutaneous melanoma case-control studies. Bayesian statistical models have the advantage of allowing useful prior information to be incorporated into the modeling process. Moreover, the Bayesian mixture model outperforms the multivariate logistic model in terms of parameter estimation and variable selection performance in most cases.
Our proposed models impose the hierarchical constraints, which further improve the Bayesian mixture model by reducing the proportion of false positive findings among the identified interactions and by successfully identifying the reported associations. This is practically appealing for studies investigating causal factors among a moderate number of candidate genetic and environmental factors along with a relatively large number of interactions. The natural and orthogonal interaction (NOIA) models of genetic effects have previously been developed to provide an analysis framework in which the estimates of effects for a quantitative trait are statistically orthogonal regardless of the existence of Hardy-Weinberg Equilibrium (HWE) within loci. Ma et al. (2012) recently developed a NOIA model for gene-environment interaction studies and have shown the advantages of using the model for detecting the true main effects and interactions, compared with the usual functional model. In this project, we propose a novel Bayesian statistical model that combines the Bayesian hierarchical mixture model with the NOIA statistical model and the usual functional model. The proposed Bayesian NOIA model demonstrates more power at detecting the non-null effects, with higher marginal posterior probabilities. We also review two Bayesian statistical models (the Bayesian empirical shrinkage-type estimator and Bayesian model averaging), which were developed for gene-environment interaction studies. Inspired by these Bayesian models, we develop two novel statistical methods that are able to handle related problems such as borrowing data from historical studies. The proposed methods are analogous to the methods for gene-environment interactions in their success at balancing statistical efficiency and bias in a unified model.
Through extensive simulation studies, we compare the operating characteristics of the proposed models with those of existing models, including the hierarchical meta-analysis model. The results show that the proposed approaches adaptively borrow the historical data in a data-driven way. These novel models may have a broad range of statistical applications in both genetic/genomic and clinical studies.
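The strong and weak hierarchy rules can be made concrete with a small model-space sketch (a generic illustration, not the dissertation's implementation): an interaction A:B may enter the model only if both (strong) or at least one (weak) of its parent main effects A, B are present.

```python
from itertools import combinations

def valid(model, rule):
    """Check the hierarchy rule for a candidate model (a set of term names).
    Interaction terms are written 'A:B'; rule is 'strong', 'weak', or 'none'."""
    for term in model:
        if ":" in term:
            parents = term.split(":")
            present = sum(p in model for p in parents)
            if rule == "strong" and present < len(parents):
                return False  # strong: every parent main effect must be in
            if rule == "weak" and present == 0:
                return False  # weak: at least one parent main effect must be in
    return True

terms = ["A", "B", "A:B"]
models = [set(c) for k in range(len(terms) + 1) for c in combinations(terms, k)]

counts = {rule: sum(valid(m, rule) for m in models) for rule in ("strong", "weak", "none")}
print(counts)  # {'strong': 5, 'weak': 7, 'none': 8}
```

Of the 8 subsets of {A, B, A:B}, the strong rule excludes the three that contain A:B without both parents, the weak rule excludes only {A:B}, and the unconstrained 'independent' model allows all 8; pruning the model space this way is what removes irrelevant interactions more efficiently.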
Abstract:
OBJECTIVE: To systematically review published literature to examine the complications associated with the use of misoprostol and compare these complications to those associated with other forms of abortion induction. DATA SOURCES: Studies were identified through searches of medical literature databases including Medline (Ovid), PubMed (NLM), LILACS, sciELO, and AIM (AFRO), and review of references of relevant articles. STUDY SELECTION AND METHODS: A descriptive systematic review that included studies reported in English and published before December 2012. Eligibility criteria included: misoprostol (with or without other methods) and any other method of abortion in a developing country, as well as quantitative data on the complication of each method. The following is information extracted from each study: author/year, country/city, study design/study sample, age range, setting of data collection, sample size, the method of abortion induction, the number of cases for each method, and the percentage of complications with each method. RESULTS: A total of 4 studies were identified (all in Latin America) describing post-abortion complications of misoprostol and other methods in countries where abortion is generally considered unsafe and/or illegal. The four studies reported on a range of complications including: bleeding, infection, incomplete abortion, intense pelvic pain, uterine perforation, headache, diarrhea, nausea, mechanical lesions, and systemic collapse. The most prevalent complications of misoprostol-induced abortion reported were: bleeding (7-82%), incomplete abortion (33-70%), and infection (0.8-67%). The prevalence of these complications reported from other abortion methods include: bleeding (16-25%), incomplete abortion (15-82%), and infection (13-50%). CONCLUSION: The literature identified by this systematic review is inadequate for determining the complications of misoprostol used in unsafe settings.
Abortion is considered an illicit behavior in these countries, which makes it difficult to investigate the details needed to conduct a study on abortion complications. Given the differences between the reviewed studies, as well as a variety of study limitations, it is not possible to draw firm conclusions about the rates of specific abortion-related complications.
Abstract:
The increasing number of works related to surface texture characterization based on 3D information makes it convenient to rethink traditional methods based on two-dimensional measurements from profiles. This work compares results between measurements obtained using two- and three-dimensional methods. It uses three kinds of data sources: reference surfaces, randomly generated surfaces, and measured surfaces. Preliminary results are presented. These results must be completed to cover a wider range of possibilities according to the manufacturing process and the measurement instrumentation, since results can vary quite significantly between them.
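The difference between profile-based and areal characterization can be seen in the simplest pair of parameters: the profile roughness Ra (ISO 4287), the mean absolute deviation of heights along a single trace, and its areal counterpart Sa (ISO 25178), the same quantity over the full surface. A minimal numpy sketch on a synthetic grooved surface (invented for illustration):

```python
import numpy as np

def Ra(profile):
    """Profile arithmetic-mean roughness: mean |z - z_mean| along one trace."""
    z = np.asarray(profile, dtype=float)
    return np.mean(np.abs(z - z.mean()))

def Sa(surface):
    """Areal arithmetic-mean roughness: mean |z - z_mean| over the whole surface."""
    z = np.asarray(surface, dtype=float)
    return np.mean(np.abs(z - z.mean()))

# Synthetic surface: roughness varies with direction, so a single profile can mislead.
x = np.linspace(0, 2 * np.pi, 256)
y = np.linspace(0, 2 * np.pi, 256)
X, Y = np.meshgrid(x, y)
Z = np.sin(4 * X)  # grooves along y: rough across x, perfectly smooth along y

print(Ra(Z[0, :]))  # profile across the grooves: nonzero roughness
print(Ra(Z[:, 0]))  # profile along a groove: zero, misses the texture entirely
print(Sa(Z))        # areal value captures the surface as a whole
```

This is exactly why results from 2D and 3D methods can differ significantly depending on the manufacturing process: a profile taken along the lay of a machined surface reports far less roughness than the areal parameter does.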
Abstract:
Two different methods of analysis of plate bending, the FEM and the BM, are discussed in this paper. The plate behaviour is assumed to be represented by linear thin plate theory, where the Poisson-Kirchhoff assumption holds. The BM, based on a weighted mean square error technique, produced good results for the problem of plate bending. The computational effort demanded by the BM is smaller than that needed in a FEM analysis for the same level of accuracy. The general applicability of the FEM cannot be matched by the BM; in particular, different types of geometry (plates of arbitrary geometry) need a similar but not identical treatment in the BM. However, this loss of generality is counterbalanced by the computational efficiency gained by the BM in achieving the solution.
Abstract:
The design of shell and spatial structures represents an important challenge even with the use of modern computer technology. If we concentrate on concrete shell structures, many problems must be faced, such as the conceptual and structural disposition, optimal shape design, analysis, construction methods, details, etc., and all these problems are interconnected. As an example, shape optimization requires the use of several disciplines, such as structural analysis, sensitivity analysis, optimization strategies, and geometrical design concepts. Similar comments can be applied to other space structures, such as steel trusses with single or double shape and tension structures. In relation to the analysis, the Finite Element Method appears to be the most widespread and versatile technique used in practice. In the application of this method several issues arise. First, the pertinent shell theory or, alternatively, the degenerated 3-D solid approach should be chosen. According to this choice, the suitable FE model has to be adopted, i.e. a displacement, stress, or mixed formulated element. The good behaviour of shell structures under dead loads, which are carried towards the supports mainly by compressive stresses, is impaired by the high imperfection sensitivity usually exhibited by these structures. This last effect is particularly important if large deformations and material nonlinearities of the shell interact unfavourably, as can be the case for thin reinforced shells. In this respect the study of the stability of the shell represents a compulsory step in the analysis.
Therefore there are currently very active fields of research, such as the different descriptions of consistent nonlinear shell models given by Simo, Fox and Rifai, Mantzenmiller, and Buchter and Ramm, among others; the consistent formulation of efficient tangent stiffnesses, as presented by Ortiz, Schweizerhof and Wriggers, with application to concrete shells exhibiting creep behaviour given by Scordelis and coworkers; and finally the development of the numerical techniques needed to trace the nonlinear response of the structure. The objective of this paper is concentrated on the last research aspect, i.e. the presentation of a state of the art of the existing solution techniques for the nonlinear analysis of structures. In this presentation the following excellent reviews on the subject will be mainly used.
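The most basic of the solution techniques for tracing a nonlinear structural response is incremental-iterative Newton-Raphson load stepping. The following sketch is a generic illustration on a one-degree-of-freedom hardening spring, not any of the reviewed shell formulations: the load is applied in increments, and at each level the out-of-balance force is driven to zero with the tangent stiffness.

```python
# Incremental-iterative Newton-Raphson for a 1-DOF nonlinear spring:
# internal force f(u) = k*u + a*u**3, tangent stiffness Kt(u) = k + 3*a*u**2.
# The spring constants below are hypothetical illustration values.
k, a = 1.0, 0.5

def f_int(u):
    return k * u + a * u ** 3

def K_tan(u):
    return k + 3.0 * a * u ** 2

def solve(load, steps=10, tol=1e-10, max_iter=50):
    """Trace the equilibrium path up to `load` in equal increments."""
    u = 0.0
    path = []
    for i in range(1, steps + 1):
        lam = load * i / steps          # current load level
        for _ in range(max_iter):
            residual = lam - f_int(u)   # out-of-balance force
            if abs(residual) < tol:
                break
            u += residual / K_tan(u)    # tangent-stiffness correction
        path.append((lam, u))
    return path

path = solve(3.0)
print(path[-1])  # (load, displacement) at the final load level
```

Pure load control of this kind fails at limit points, where the tangent stiffness becomes singular, which is precisely why the arc-length and other path-following techniques surveyed in the paper were developed.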
Abstract:
We outline here a proof that a certain rational function Cn(q, t), which has come to be known as the “q, t-Catalan,” is in fact a polynomial with positive integer coefficients. This has been an open problem since 1994. Because Cn(q, t) evaluates to the Catalan number at t = q = 1, it has also been an open problem to find a pair of statistics a, b on the collection
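Since Cn(q, t) specializes to the Catalan number at t = q = 1, the target values are easy to generate. A small sketch (illustrative only; it computes the ordinary Catalan numbers, not the full q, t-polynomial):

```python
from math import comb

def catalan(n):
    """n-th Catalan number: C_n = binom(2n, n) / (n + 1)."""
    return comb(2 * n, n) // (n + 1)

# C_n(q, t) evaluated at q = t = 1 should reproduce these values.
print([catalan(n) for n in range(6)])  # [1, 1, 2, 5, 14, 42]
```

A pair of statistics a, b as sought above would distribute each C_n into coefficients of q^a t^b summing back to these totals at q = t = 1.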