12 results for "Statistical modeling technique"

in Biblioteca Digital da Produção Intelectual da Universidade de São Paulo


Relevance: 100.00%

Abstract:

In this paper, we carry out robust modeling and influence diagnostics in Birnbaum-Saunders (BS) regression models. Specifically, we present some aspects related to the BS and log-BS distributions and their generalizations from the Student-t distribution, and we develop BS-t regression models, including maximum likelihood estimation based on the EM algorithm and diagnostic tools. In addition, we apply the obtained results to real insurance data, which illustrates the usefulness of the proposed model. Copyright (c) 2011 John Wiley & Sons, Ltd.
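
As a rough illustration of the estimation step, the sketch below fits a classical log-BS (sinh-normal) regression by direct maximum likelihood on simulated data; the paper's BS-t extension replaces the normal kernel with a Student-t kernel and uses an EM algorithm, which is not reproduced here. All data and parameter values are made up.

```python
# Minimal sketch (not the authors' code): maximum-likelihood fit of a
# log-Birnbaum-Saunders (sinh-normal) regression model by direct optimization.
# The BS-t extension in the paper replaces the normal kernel with a Student-t
# kernel and estimates the parameters via an EM algorithm.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Simulated data (hypothetical): y_i = log T_i, mu_i = x_i' beta
n = 300
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true, alpha_true = np.array([1.0, 0.5]), 0.8
mu = X @ beta_true
y = mu + 2.0 * np.arcsinh(alpha_true * rng.normal(size=n) / 2.0)

def neg_loglik(theta):
    beta, log_alpha = theta[:-1], theta[-1]
    alpha = np.exp(log_alpha)                     # keep alpha > 0
    r = (y - X @ beta) / 2.0
    # sinh-normal log-density of y_i given mu_i and alpha
    ll = (-0.5 * np.log(2 * np.pi) - np.log(alpha)
          + np.log(np.cosh(r)) - (2.0 / alpha**2) * np.sinh(r)**2)
    return -ll.sum()

start = np.r_[np.linalg.lstsq(X, y, rcond=None)[0], 0.0]
fit = minimize(neg_loglik, start, method="BFGS")
beta_hat, alpha_hat = fit.x[:-1], np.exp(fit.x[-1])
print("beta:", beta_hat, "alpha:", alpha_hat)
```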

Relevance: 80.00%

Abstract:

In this paper, a modeling technique for small-signal stability assessment of unbalanced power systems is presented. Since power distribution systems are inherently unbalanced, due to the characteristics of their lines and loads, and the penetration of distributed generation into these systems is currently increasing, such a tool is needed to ensure their secure and reliable operation. The main contribution of this paper is the development of a phasor-based model for the study of dynamic phenomena in unbalanced power systems. Using an assumption on the net torque of the generator, it is possible to precisely define an equilibrium point for the phasor model of the system, thus enabling its linearization around this point and, consequently, its eigenvalue/eigenvector analysis for small-signal stability assessment. The modeling technique presented here was compared with the dynamic behavior observed in ATP simulations, and the results show that, for the generator and controller models used, the proposed modeling approach is adequate and yields reliable and precise results.
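
The workflow behind such an assessment can be sketched as follows: find an equilibrium point, linearize the model numerically, and inspect the eigenvalues. The toy below uses a single-machine-infinite-bus swing equation as a stand-in for the paper's unbalanced three-phase phasor model; all parameter values are hypothetical.

```python
# Toy sketch of the eigenvalue-based small-signal workflow: equilibrium,
# numerical linearization, eigenvalue analysis. The swing equation below is a
# stand-in for the paper's unbalanced phasor model.
import numpy as np
from scipy.optimize import fsolve

# Parameters (hypothetical, per-unit): mechanical power, max electric power,
# inertia and damping constants, synchronous speed.
Pm, Pmax, H, D, ws = 0.8, 1.5, 3.5, 1.0, 2 * np.pi * 60

def f(x):
    """State derivatives for x = [delta, dw] (rotor angle, speed deviation)."""
    delta, dw = x
    ddelta = ws * dw
    ddw = (Pm - Pmax * np.sin(delta) - D * dw) / (2 * H)
    return np.array([ddelta, ddw])

# 1) Equilibrium point (net accelerating torque equal to zero)
x_eq = fsolve(f, x0=np.array([0.5, 0.0]))

# 2) Numerical Jacobian at the equilibrium (forward differences)
eps = 1e-6
A = np.column_stack([(f(x_eq + eps * np.eye(2)[:, j]) - f(x_eq)) / eps
                     for j in range(2)])

# 3) Eigenvalues: negative real parts => small-signal stable operating point
eigvals, eigvecs = np.linalg.eig(A)
print("equilibrium:", x_eq)
print("eigenvalues:", eigvals)
```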

Relevance: 80.00%

Abstract:

We use a recently developed computerized modeling technique to explore the long-term impacts of indigenous Amazonian hunting in the past, present, and future. The model redefines sustainability in spatial and temporal terms, a major advance over the static "sustainability indices" currently used to study hunting in tropical forests. We validate the model's projections against actual field data from two sites in contemporary Amazonia and use the model to assess various management scenarios for the future of Manu National Park in Peru. We then apply the model to two archaeological contexts, show how its results may resolve long-standing enigmas regarding native food taboos and primate biogeography, and reflect on the ancient history and future of indigenous people in the Amazon.
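
The spatial-temporal notion of sustainability can be illustrated with a toy grid model (not the published model): prey grow logistically on a lattice while hunting pressure decays with distance from a settlement, so depletion is tracked as where and when local densities fall below a threshold rather than as a single static index. All rates below are invented for illustration.

```python
# Toy illustration (not the published model): spatially explicit prey density
# with logistic growth and distance-dependent hunting pressure.
import numpy as np

size, years = 51, 50
r, K = 0.2, 1.0                      # intrinsic growth rate, carrying capacity
h0, scale = 0.35, 6.0                # peak harvest rate, decay distance (cells)

yy, xx = np.mgrid[0:size, 0:size]
dist = np.hypot(xx - size // 2, yy - size // 2)   # distance from central settlement
harvest = h0 * np.exp(-dist / scale)              # distance-dependent offtake rate

density = np.full((size, size), K)
depleted_area = []
for t in range(years):
    growth = r * density * (1 - density / K)
    density = np.clip(density + growth - harvest * density, 0.0, K)
    depleted_area.append(int((density < 0.2 * K).sum()))  # cells below 20% of K

print("depleted cells at years 0, 10, 20, 30, 40:", depleted_area[::10])
```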

Relevance: 80.00%

Abstract:

Stochastic methods based on time-series modeling combined with geostatistics can be useful tools to describe the variability of water-table levels in time and space and to account for uncertainty. Water-level monitoring networks can provide information about the dynamics of the aquifer domain in both dimensions. Time-series modeling is an elegant way to treat monitoring data without the complexity of physically based mechanistic models. Time-series model predictions can be interpolated spatially, with the spatial differences in water-table dynamics determined by the spatial variation in the system properties and the temporal variation driven by the dynamics of the inputs into the system. An integration of stochastic methods based on time-series modeling and geostatistics is presented as a framework to predict water levels for decision making in groundwater management and land-use planning. The methodology is applied in a case study in a Guarani Aquifer System (GAS) outcrop area located in the southeastern part of Brazil. Communication of results in a clear and understandable form, via simulated scenarios, is discussed as an alternative for translating scientific knowledge into applications of stochastic hydrogeology in large aquifers with limited monitoring network coverage, such as the GAS.
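
A minimal sketch of the fit-then-interpolate idea is given below, on simulated wells: a harmonic (trend plus seasonal) regression stands in for the paper's time-series models, and inverse-distance weighting stands in for the geostatistical (kriging) step. None of the coordinates or levels are real.

```python
# Sketch of fitting a per-well time-series model and interpolating its
# predictions in space. All monitoring data below are simulated.
import numpy as np

rng = np.random.default_rng(1)
n_wells, n_months = 8, 60
t = np.arange(n_months)

# Simulated monitoring network: well coordinates (km) and monthly water levels (m)
xy = rng.uniform(0, 50, size=(n_wells, 2))
levels = (620 + 0.5 * xy[:, :1]                       # spatial gradient
          + 1.5 * np.sin(2 * np.pi * t / 12)          # seasonality
          - 0.02 * t + rng.normal(0, 0.3, (n_wells, n_months)))

# 1) Per-well time-series model: level ~ trend + annual harmonics
D = np.column_stack([np.ones_like(t), t,
                     np.sin(2 * np.pi * t / 12), np.cos(2 * np.pi * t / 12)])
coefs = np.array([np.linalg.lstsq(D, levels[i], rcond=None)[0]
                  for i in range(n_wells)])

# 2) Predict each well's level 12 months ahead
t_new = n_months + 12
d_new = np.array([1.0, t_new, np.sin(2 * np.pi * t_new / 12),
                  np.cos(2 * np.pi * t_new / 12)])
pred = coefs @ d_new

# 3) Spatial interpolation of the predictions to an unmonitored location
#    (inverse-distance weighting stands in for kriging here)
target = np.array([25.0, 25.0])
w = 1.0 / (np.linalg.norm(xy - target, axis=1) ** 2 + 1e-9)
print("predicted water level at target:", float(np.sum(w * pred) / np.sum(w)))
```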

Relevance: 80.00%

Abstract:

Statistical methods have been widely employed to assess the capabilities of credit scoring classification models in order to reduce the risk of wrong decisions when granting credit facilities to clients. The predictive quality of a classification model can be evaluated based on measures such as sensitivity, specificity, predictive values, accuracy, correlation coefficients and information-theoretic measures, such as relative entropy and mutual information. In this paper we analyze the performance of a naive logistic regression model (Hosmer & Lemeshow, 1989) and a logistic regression with state-dependent sample selection model (Cramer, 2004) applied to simulated data. As a case study, the methodology is also illustrated on a data set extracted from a Brazilian bank portfolio. Our simulation results revealed no statistically significant difference in predictive capacity between the naive logistic regression models and the logistic regression with state-dependent sample selection models. However, there is a strong difference between the distributions of the estimated default probabilities from these two statistical modeling techniques, with the naive logistic regression models always underestimating such probabilities, particularly in the presence of balanced samples. (C) 2012 Elsevier Ltd. All rights reserved.
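
The evaluation side of such a comparison can be sketched as below: a plain ("naive") logistic regression is fitted to simulated credit data and sensitivity, specificity and accuracy are computed from the confusion matrix. The state-dependent sample selection model of Cramer (2004) is not implemented here.

```python
# Minimal sketch: naive logistic regression on simulated credit data plus the
# kind of performance measures mentioned above. Simulated data only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, accuracy_score

rng = np.random.default_rng(42)
n = 5000
X = rng.normal(size=(n, 3))                    # simulated client features
logit = -1.0 + X @ np.array([0.8, -0.6, 0.4])  # assumed "true" default model
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))  # 1 = default

model = LogisticRegression().fit(X, y)
p_hat = model.predict_proba(X)[:, 1]           # estimated default probabilities
y_hat = (p_hat >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y, y_hat).ravel()
print("sensitivity:", tp / (tp + fn))
print("specificity:", tn / (tn + fp))
print("accuracy:   ", accuracy_score(y, y_hat))
print("mean estimated default probability:", p_hat.mean(), "observed rate:", y.mean())
```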

Relevance: 30.00%

Abstract:

In accelerating dark energy models, the estimates of the Hubble constant, H0, from the Sunyaev-Zel'dovich effect (SZE) and the X-ray surface brightness of galaxy clusters may depend on the matter content (Ω_M), the curvature (Ω_K) and the equation of state parameter ω. In this article, by using a sample of 25 angular diameter distances of galaxy clusters described by the elliptical beta model obtained through the SZE/X-ray technique, we constrain H0 in the framework of a general ΛCDM model (arbitrary curvature) and a flat XCDM model with a constant equation of state parameter ω = p_x/ρ_x. In order to avoid the use of priors on the cosmological parameters, we apply a joint analysis involving the baryon acoustic oscillations (BAO) and the CMB Shift Parameter signature. By taking into account the statistical and systematic errors of the SZE/X-ray technique, we obtain for the nonflat ΛCDM model H0 = 74 (+5.0, -4.0) km s^-1 Mpc^-1 (1σ), whereas for a flat universe with a constant equation of state parameter we find H0 = 72 (+5.5, -4.0) km s^-1 Mpc^-1 (1σ). By assuming that galaxy clusters are described by a spherical beta model, these results change to H0 = 6 (+8.0, -7.0) and H0 = 59 (+9.0, -6.0) km s^-1 Mpc^-1 (1σ), respectively. The results from the elliptical description are in good agreement with independent studies from the Hubble Space Telescope key project and recent estimates based on the Wilkinson Microwave Anisotropy Probe, thereby suggesting that the combination of these three independent phenomena provides an interesting method to constrain the Hubble constant. As an extra bonus, the adoption of the elliptical description is revealed to be a quite realistic assumption. Finally, by comparing these results with a recent determination for a flat ΛCDM model using only the SZE/X-ray technique and BAO, we see that the geometry has a very weak influence on H0 estimates for this combination of data.
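
The core statistical step, a chi-square fit of H0 to cluster angular diameter distances, can be sketched as below for a flat ΛCDM cosmology with the matter density held fixed; the distances are simulated and the joint BAO + CMB shift parameter analysis of the paper is not reproduced.

```python
# Schematic sketch (simulated data, flat LambdaCDM, Omega_m fixed): fit H0 by
# chi-square minimization to cluster angular diameter distances.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

c = 299792.458                      # speed of light, km/s
Om = 0.27                           # matter density (assumed fixed here)

def D_A(z, H0):
    """Angular diameter distance (Mpc) in a flat LambdaCDM cosmology."""
    integrand = lambda zp: 1.0 / np.sqrt(Om * (1 + zp)**3 + 1 - Om)
    I, _ = quad(integrand, 0.0, z)
    return (c / H0) * I / (1 + z)

# Simulated "SZE/X-ray" sample: 25 clusters with 15% distance errors
rng = np.random.default_rng(7)
z_obs = rng.uniform(0.05, 0.8, 25)
d_true = np.array([D_A(z, 72.0) for z in z_obs])
sigma = 0.15 * d_true
d_obs = d_true + rng.normal(0, sigma)

def chi2(H0):
    model = np.array([D_A(z, H0) for z in z_obs])
    return np.sum(((d_obs - model) / sigma) ** 2)

best = minimize_scalar(chi2, bounds=(50, 100), method="bounded")
print("best-fit H0:", round(best.x, 1), "km/s/Mpc")
```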

Relevance: 30.00%

Abstract:

Objectives: This study evaluated the influence of the cavity configuration factor ("C-Factor") and light-activation technique on the polymerization contraction forces of a Bis-GMA-based composite resin (Charisma, Heraeus Kulzer). Material and Methods: Three different pairs of steel moving bases were connected to a universal testing machine (Emic DL 500): groups A and B - 2x2 mm (CF=0.33), groups C and D - 3x2 mm (CF=0.66), groups E and F - 6x2 mm (CF=1.5). After adjusting the height between each pair of bases so that the resin had a volume of 12 mm³ in all groups, the material was inserted and polymerized by two different methods: pulse delay (100 mW/cm² for 5 s, 40 s interval, 600 mW/cm² for 20 s) and continuous pulse (600 mW/cm² for 20 s). Each configuration was light-cured with both techniques. The stresses generated during polymerization were recorded for 120 s. The values were expressed as curves (Force (N) x Time (s)) and the averages were compared by statistical analysis (ANOVA and Tukey's test, p<0.05). Results: For the 2x2 and 3x2 bases, with a reduced C-Factor, significant differences were found between the light-curing methods. For the 6x2 base, with a high C-Factor, the light-curing method did not influence the contraction forces of the composite resin. Conclusions: The pulse delay technique can produce less stress at the tooth/restoration interface of adhesive restorations only when a reduced C-Factor is present.
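
The statistical comparison (one-way ANOVA followed by Tukey's test) can be sketched as below on hypothetical peak contraction forces for the two light-curing methods at a single C-Factor; the force values are invented for illustration.

```python
# Hypothetical sketch of the comparison described above: one-way ANOVA plus
# Tukey's HSD on made-up peak contraction forces (N).
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(3)
pulse_delay      = rng.normal(6.5, 0.6, 10)   # hypothetical peak forces (N)
continuous_pulse = rng.normal(8.0, 0.7, 10)

F, p = f_oneway(pulse_delay, continuous_pulse)
print(f"ANOVA: F = {F:.2f}, p = {p:.4f}")

forces = np.concatenate([pulse_delay, continuous_pulse])
groups = ["pulse delay"] * 10 + ["continuous pulse"] * 10
print(pairwise_tukeyhsd(forces, groups, alpha=0.05))
```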

Relevance: 30.00%

Abstract:

PURPOSE: To describe a modified technique to increase nostril cross-sectional area using rib and septal cartilage grafts over the alar nasal cartilages. METHODS: A modified surgical technique was used to obtain, carve and insert cartilage grafts over the alar nasal cartilages. This study used standardized pictures and measured 90 cadaveric nostril cross-sectional areas using AutoCAD; 30 were taken before any procedure and 60 were taken after grafting over the lateral crura (30 using costal cartilage and 30 using septal cartilage). Statistical analyses were performed using a repeated-measures model and ANOVA (analysis of variance) for the variable "area". RESULTS: There is statistical evidence that the rib cartilage graft is more effective than the septal cartilage graft. The mean area after insertion of the septal cartilage graft is smaller than the mean area under the rib graft treatment (no confidence interval for the mean difference contains the zero value and all P-values are below the significance level of 5%). CONCLUSIONS: The technique presented is applicable to increasing nostril cross-sectional area in cadavers. This modified technique increased the nostril cross-sectional area more with a costal cartilage graft over the lateral crura than with a septal graft.

Relevance: 30.00%

Abstract:

Transplantation brings hope for many patients. A multidisciplinary approach in this field aims at creating biologically functional tissues to be used as implants and prostheses. The freeze-drying process allows the fundamental properties of these materials to be preserved, making future manipulation and storage easier. Optimizing a freeze-drying cycle is of great importance, since it aims at reducing process costs while increasing product quality of this time- and energy-consuming process. Mathematical modeling is a tool that helps in understanding the behavior of the process variables and, consequently, supports optimization studies. Freeze-drying microscopy is a technique usually applied to determine critical temperatures of liquid formulations. In this work it was used to determine the sublimation rates during freeze-drying of a biological tissue. The sublimation rates were measured from the speed of the moving interface between the dried and the frozen layers at 21.33, 42.66 and 63.99 Pa. The studied variables were used in a theoretical model to simulate various temperature profiles of the freeze-drying process. Good agreement between the experimental and the simulated results was found.
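
How a sublimation rate is obtained from the moving interface can be sketched as below: the interface position versus time is fitted by least squares and the resulting velocity is converted into a mass flux. The positions, porosity and the simple flux relation are assumptions for illustration, not the paper's data or model.

```python
# Illustrative sketch: sublimation rate from the measured dried/frozen
# interface position versus time. Numbers and flux relation are assumed.
import numpy as np

# Hypothetical interface positions (m) read from micrographs at a chamber
# pressure of 21.33 Pa, sampled every 30 s
time_s = np.arange(0, 300, 30, dtype=float)
position_m = 1e-6 * np.array([0, 14, 31, 44, 60, 73, 91, 104, 119, 133])

# Interface velocity from a least-squares linear fit
velocity, intercept = np.polyfit(time_s, position_m, 1)   # m/s

# Simplified mass flux: ice density * frozen water fraction * velocity
rho_ice, eps = 917.0, 0.8                                  # kg/m^3, dimensionless
flux = rho_ice * eps * velocity                            # kg m^-2 s^-1
print(f"interface velocity: {velocity:.2e} m/s, sublimation flux: {flux:.2e} kg/m^2/s")
```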

Relevance: 30.00%

Abstract:

Background: Several models have been designed to predict the survival of patients with heart failure. These models, while available and widely used both for stratifying patients and for deciding upon different treatment options at the individual level, have several limitations. Specifically, some clinical variables that may influence prognosis may have an influence that changes over time. Statistical models that include such a characteristic may help in evaluating prognosis. The aim of the present study was to analyze and quantify the impact of modeling heart failure survival allowing for covariates with time-varying effects that are known to be independent predictors of overall mortality in this clinical setting. Methodology: Survival data from an inception cohort of five hundred patients diagnosed with heart failure functional class III and IV between 2002 and 2004 and followed up to 2006 were analyzed using the Cox proportional hazards model, variations of the Cox model, and the Aalen additive model. Principal Findings: One hundred and eighty-eight (188) patients died during follow-up. For the patients under study, age, serum sodium, hemoglobin, serum creatinine, and left ventricular ejection fraction were significantly associated with mortality. Evidence of a time-varying effect was suggested for the last three. Both high hemoglobin and high LV ejection fraction were associated with a reduced risk of dying, with a stronger initial effect. High creatinine, associated with an increased risk of dying, also presented a stronger initial effect. The impacts of age and sodium were constant over time. Conclusions: The current study points to the importance of evaluating covariates with time-varying effects in heart failure models. The analysis performed suggests that variations of the Cox and Aalen models constitute a valuable tool for identifying these variables. The implementation of covariates with time-varying effects into heart failure prognostication models may reduce bias and increase the specificity of such models.
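
The two modeling ingredients discussed above can be sketched on simulated data as below: a Cox proportional hazards fit with a check of the proportionality assumption, and an Aalen additive fit whose cumulative coefficients expose time-varying effects. The lifelines package is used here as one possible implementation; it is not the software used in the study, and the cohort data are simulated.

```python
# Minimal sketch (simulated data, lifelines as one possible implementation):
# Cox PH fit + proportionality check, and an Aalen additive fit.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter, AalenAdditiveFitter

rng = np.random.default_rng(5)
n = 500
df = pd.DataFrame({
    "age": rng.normal(60, 10, n),
    "sodium": rng.normal(138, 4, n),
    "creatinine": rng.normal(1.4, 0.5, n),
})
# Simulated survival times and event indicator (not the cohort's data)
hazard = np.exp(0.03 * (df["age"] - 60) + 0.5 * (df["creatinine"] - 1.4))
df["time"] = rng.exponential(24 / hazard)          # months
df["died"] = (df["time"] < 48).astype(int)
df.loc[df["died"] == 0, "time"] = 48               # administrative censoring

cph = CoxPHFitter().fit(df, duration_col="time", event_col="died")
cph.print_summary()
cph.check_assumptions(df)                          # flags non-proportional hazards

aaf = AalenAdditiveFitter().fit(df, duration_col="time", event_col="died")
print(aaf.cumulative_hazards_.head())              # time-varying cumulative effects
```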

Relevance: 30.00%

Abstract:

A new method for the analysis of scattering data from lamellar bilayer systems is presented. The method employs a form-free description of the cross-section structure of the bilayer, and the fit is performed directly to the scattering data, also introducing a structure factor when required. The cross-section structure (the electron density profile in the case of X-ray scattering) is described by a set of Gaussian functions, and the technique is termed Gaussian deconvolution. The coefficients of the Gaussians are optimized using a constrained least-squares routine that induces smoothness of the electron density profile. The optimization is coupled with the point-of-inflection method for determining the optimal weight of the smoothness. With the new approach, it is possible to optimize simultaneously the form factor, the structure factor and several other parameters in the model. The applicability of this method is demonstrated in a study of a multilamellar system composed of lecithin bilayers, where the form factor and structure factor are obtained simultaneously, and the results provide new insight into this very well known system.
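
A schematic sketch of the Gaussian-description idea is given below: a symmetric electron density profile is written as a sum of Gaussians, its form factor is computed analytically, and the coefficients are fitted to simulated intensities with a curvature penalty playing the role of the smoothness constraint. This is only an illustration of the approach; the structure factor and the point-of-inflection weight selection are omitted.

```python
# Schematic sketch, not the published code: Gaussian description of a symmetric
# bilayer profile fitted to simulated intensity data with a smoothness penalty.
import numpy as np
from scipy.optimize import least_squares

q = np.linspace(0.05, 0.6, 120)          # scattering vector, 1/Angstrom
z = np.linspace(-40, 40, 161)            # profile grid, Angstrom

def profile(params, z):
    """Symmetric electron density profile: sum of mirrored Gaussians."""
    rho = np.zeros_like(z)
    for A, z0, sig in params.reshape(-1, 3):
        rho += A * (np.exp(-(z - z0)**2 / (2 * sig**2))
                    + np.exp(-(z + z0)**2 / (2 * sig**2)))
    return rho

def form_factor(params, q):
    """Analytic cosine transform of the mirrored-Gaussian profile."""
    F = np.zeros_like(q)
    for A, z0, sig in params.reshape(-1, 3):
        F += 2 * A * sig * np.sqrt(2 * np.pi) * np.exp(-(q * sig)**2 / 2) * np.cos(q * z0)
    return F

# Simulated "observed" intensity from a known 2-Gaussian profile + noise
true = np.array([0.4, 20.0, 3.0,  -0.2, 0.0, 6.0])   # headgroups, methyl trough
rng = np.random.default_rng(11)
I_obs = form_factor(true, q)**2 + rng.normal(0, 0.02, q.size)

lam = 5.0                                  # smoothness weight (chosen ad hoc here)
def residuals(params):
    data_res = form_factor(params, q)**2 - I_obs
    rho = profile(params, z)
    smooth_res = lam * np.diff(rho, 2)     # penalize curvature of the profile
    return np.concatenate([data_res, smooth_res])

fit = least_squares(residuals, x0=np.array([0.3, 18.0, 4.0, -0.1, 0.0, 5.0]))
print("fitted Gaussian parameters:", fit.x.round(2))
```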

Relevance: 30.00%

Abstract:

Background: To understand the molecular mechanisms underlying important biological processes, a detailed description of the networks of gene products involved is required. In order to define and understand such molecular networks, several statistical methods have been proposed in the literature to estimate gene regulatory networks from time-series microarray data. However, several problems still need to be overcome. Firstly, information flow needs to be inferred, in addition to the correlation between genes. Secondly, we usually try to identify large networks from a large number of genes (parameters) originating from a smaller number of microarray experiments (samples). Due to this situation, which is rather frequent in bioinformatics, it is difficult to perform statistical tests using methods that model large gene-gene networks. In addition, most of the models are based on dimension reduction using clustering techniques; therefore, the resulting network is not a gene-gene network but a module-module network. Here, we present the Sparse Vector Autoregressive (SVAR) model as a solution to these problems. Results: We have applied the Sparse Vector Autoregressive model to estimate gene regulatory networks based on gene expression profiles obtained from time-series microarray experiments. Through extensive simulations, applying the SVAR method to artificial regulatory networks, we show that SVAR can infer true positive edges even under conditions in which the number of samples is smaller than the number of genes. Moreover, it is possible to control for false positives, a significant advantage when compared with other methods described in the literature, which are based on ranks or score functions. By applying SVAR to actual HeLa cell cycle gene expression data, we were able to identify well-known transcription factor targets. Conclusion: The proposed SVAR method is able to model gene regulatory networks in the frequent situations in which the number of samples is lower than the number of genes, making it possible to naturally infer partial Granger causalities without any a priori information. In addition, we present a statistical test to control the false discovery rate, which was not previously possible using other gene regulatory network models.
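
The basic idea of a sparse VAR(1) network estimate can be sketched as below: each gene's expression at time t is regressed on all genes at time t-1 with an L1 (lasso) penalty, and nonzero coefficients are read as directed edges. This uses scikit-learn's Lasso as a stand-in and omits the authors' false-discovery-rate test; the data are simulated.

```python
# Minimal sketch of a sparse VAR(1) gene-network estimate on simulated data.
# Not the authors' implementation; no FDR-controlling test is applied here.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(8)
p, T = 20, 15                      # more genes than time points
A_true = np.zeros((p, p))
A_true[np.arange(1, p), np.arange(0, p - 1)] = 0.8   # a sparse chain network

# Simulate a short time-series microarray experiment
X = np.zeros((T, p))
X[0] = rng.normal(size=p)
for t in range(1, T):
    X[t] = X[t - 1] @ A_true.T + 0.3 * rng.normal(size=p)

# Sparse VAR(1): one lasso regression per target gene
A_hat = np.zeros((p, p))
for j in range(p):
    model = Lasso(alpha=0.05, max_iter=10000).fit(X[:-1], X[1:, j])
    A_hat[j] = model.coef_

edges = np.abs(A_hat) > 1e-6
true_edges = A_true != 0
tp = np.logical_and(edges, true_edges).sum()
print("recovered edges:", edges.sum(), "of which true:", tp, "/", true_edges.sum())
```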