135 results for Semi-parametric models
in Biblioteca Digital da Produção Intelectual da Universidade de São Paulo (BDPI/USP)
Abstract:
In this article, we introduce a semi-parametric Bayesian approach based on Dirichlet process priors for the discrete calibration problem in binomial regression models. A motivating application is the dosimetry problem related to the dose-response model. A hierarchical formulation is provided, from which a Markov chain Monte Carlo approach is developed. The methodology is applied to simulated and real data.
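As a hedged illustration of the Dirichlet-process machinery underlying such priors, the sketch below draws a random measure from a truncated stick-breaking construction; the truncation level, concentration parameter, and standard-normal base measure are illustrative assumptions, not details taken from the article.

```python
# Minimal sketch: a draw from a truncated stick-breaking Dirichlet process
# prior, DP(alpha, G0). Truncation level K and base measure G0 = N(0, 1)
# are illustrative choices, not taken from the article.
import numpy as np

rng = np.random.default_rng(0)

def dp_stick_breaking(alpha, K=50, base_sampler=lambda size, rng: rng.standard_normal(size), rng=rng):
    """Return atoms and weights of a K-truncated draw G ~ DP(alpha, G0)."""
    betas = rng.beta(1.0, alpha, size=K)          # stick-breaking proportions
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - betas[:-1])))
    weights = betas * remaining                   # pi_k = beta_k * prod_{j<k}(1 - beta_j)
    atoms = base_sampler(K, rng)                  # atom locations drawn from G0
    return atoms, weights

atoms, weights = dp_stick_breaking(alpha=2.0)
# Sample 5 observations from the discrete random measure G
samples = rng.choice(atoms, size=5, p=weights / weights.sum())
print(samples)
```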
Abstract:
Recently, semi-empirical models to estimate the flow boiling heat transfer coefficient, saturated CHF and pressure drop in micro-scale channels have been proposed. Most of these models were developed for elongated-bubble and annular flows, in view of the fact that these flow patterns are predominant in smaller channels. In these models the liquid film thickness plays an important role, which emphasizes that accurate measurement of the liquid film thickness is a key point in validating them. On the other hand, several techniques have been successfully applied to measure liquid film thickness during condensation and evaporation under macro-scale conditions. However, although this subject has been targeted by several leading laboratories around the world, there appears to be no conclusive result describing a successful technique capable of measuring dynamic liquid film thickness during evaporation inside micro-scale round channels. This work presents a comprehensive literature review of the methods used to measure liquid film thickness in macro- and micro-scale systems. The methods are described and the main difficulties related to their use in micro-scale systems are identified. Based on this discussion, the most promising methods to measure dynamic liquid film thickness in micro-scale channels are identified.
Abstract:
Although many mathematical models exist predicting the dynamics of transposable elements (TEs), there is a lack of available empirical data to validate these models and their inherent assumptions. Genomes can provide a snapshot of several TE families in a single organism, and their demographics can be inferred by coalescent analysis, allowing theories on TE amplification dynamics to be tested. Using the available genomes of the mosquitoes Aedes aegypti and Anopheles gambiae, we indicate that such an approach is feasible. Our analysis follows four steps: (1) mining the two mosquito genomes currently available in search of TE families; (2) fitting a phylogenetic tree to selected families found in (1), under the general time-reversible (GTR) nucleotide substitution model with an uncorrelated lognormal (UCLN) relaxed clock and a nonparametric demographic model; (3) fitting a nonparametric coalescent model to the tree generated in (2); and (4) fitting parametric models motivated by ecological theories to the curve generated in (3).
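Step (4) is straightforward to prototype. The sketch below fits a logistic-growth curve, one ecologically motivated parametric form, to a synthetic effective-size trajectory standing in for the nonparametric curve of step (3); the data, the logistic form, and the starting values are assumptions for illustration only.

```python
# Minimal sketch of step (4): fitting a parametric logistic-growth model
# to a nonparametric effective-size trajectory N_e(t). The trajectory
# below is synthetic; the logistic form is one ecologically motivated
# candidate, not necessarily the article's specific model.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """Logistic growth: carrying capacity K, rate r, midpoint t0."""
    return K / (1.0 + np.exp(-r * (t - t0)))

t = np.linspace(0, 10, 50)                       # time axis (arbitrary units)
ne_curve = logistic(t, K=1e4, r=1.2, t0=5.0)     # stand-in for the coalescent curve
ne_curve += np.random.default_rng(1).normal(0, 200, t.size)

params, cov = curve_fit(logistic, t, ne_curve, p0=[1e4, 1.0, 4.0])
print("K, r, t0 =", params)
```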
Abstract:
The objective of the present study was to estimate milk yield genetic parameters applying random regression models and parametric correlation functions combined with a variance function to model animal permanent environmental effects. A total of 152,145 test-day milk yields from 7,317 first lactations of Holstein cows belonging to herds located in the southeastern region of Brazil were analyzed. Test-day milk yields were divided into 44 weekly classes of days in milk. Contemporary groups were defined by herd-test-day comprising a total of 2,539 classes. The model included direct additive genetic, permanent environmental, and residual random effects. The following fixed effects were considered: contemporary group, age of cow at calving (linear and quadratic regressions), and the population average lactation curve modeled by fourth-order orthogonal Legendre polynomial. Additive genetic effects were modeled by random regression on orthogonal Legendre polynomials of days in milk, whereas permanent environmental effects were estimated using a stationary or nonstationary parametric correlation function combined with a variance function of different orders. The structure of residual variances was modeled using a step function containing 6 variance classes. The genetic parameter estimates obtained with the model using a stationary correlation function associated with a variance function to model permanent environmental effects were similar to those obtained with models employing orthogonal Legendre polynomials for the same effect. A model using a sixth-order polynomial for additive effects and a stationary parametric correlation function associated with a seventh-order variance function to model permanent environmental effects would be sufficient for data fitting.
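A hedged sketch of one building block of such test-day models: evaluating Legendre polynomial covariates at standardized days in milk (DIM) for the random regressions. The polynomial order and the 5-308 d standardization range are illustrative assumptions, not values taken from the study.

```python
# Minimal sketch: Legendre-polynomial covariates for a random regression
# on days in milk (DIM), standardized to [-1, 1]. Order and DIM range
# are illustrative assumptions.
import numpy as np
from numpy.polynomial import legendre

def legendre_covariates(dim, order, dim_min=5, dim_max=308):
    """Evaluate Legendre polynomials 0..order at DIM standardized to [-1, 1]."""
    x = -1.0 + 2.0 * (dim - dim_min) / (dim_max - dim_min)
    # legval with a unit coefficient vector selects the polynomial of degree k
    return np.array([legendre.legval(x, np.eye(order + 1)[k]) for k in range(order + 1)]).T

dim = np.array([10, 60, 150, 250, 305])
Z = legendre_covariates(dim, order=4)            # design matrix for the random regressions
print(Z.shape)                                   # (5, 5): one column per polynomial
```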
Abstract:
We discuss the estimation of the expected value of quality-adjusted survival, based on multistate models. We generalize earlier work by allowing the sojourn times in health states to be non-identically distributed, for a given vector of covariates. Approaches based on semiparametric and parametric (exponential and Weibull distributions) methodologies are considered. A simulation study is conducted to evaluate the performance of the proposed estimator, and the jackknife resampling method is used to estimate its variance. An application to a real data set is also included.
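The jackknife variance step can be sketched generically; in the sketch below, `estimator` is a placeholder (the sample mean) for the quality-adjusted survival estimator, and the exponential data are synthetic.

```python
# Minimal sketch of the leave-one-out jackknife used to estimate the
# variance of an estimator theta_hat; `estimator` is any function of the
# sample (the mean is just a stand-in for the QAS estimator).
import numpy as np

def jackknife_variance(data, estimator=np.mean):
    n = len(data)
    theta_full = estimator(data)
    # Recompute the estimate n times, each time deleting one observation
    loo = np.array([estimator(np.delete(data, i)) for i in range(n)])
    theta_bar = loo.mean()
    var = (n - 1) / n * np.sum((loo - theta_bar) ** 2)
    return theta_full, var

data = np.random.default_rng(2).exponential(scale=2.0, size=40)
est, var = jackknife_variance(data)
print(f"estimate={est:.3f}, jackknife variance={var:.4f}")
```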
Abstract:
The PHENIX experiment has measured the suppression of semi-inclusive single high-transverse-momentum π⁰'s in Au+Au collisions at √s_NN = 200 GeV. The present understanding of this suppression is in terms of energy loss of the parent (fragmenting) parton in a dense color-charge medium. We have performed a quantitative comparison between various parton energy-loss models and our experimental data. The statistical point-to-point uncorrelated as well as correlated systematic uncertainties are taken into account in the comparison. We detail this methodology and the resulting constraints on the model parameters, such as the initial color-charge density dN_g/dy, the medium transport coefficient q̂, or the initial energy-loss parameter ε₀. We find that high-transverse-momentum π⁰ suppression in Au+Au collisions has sufficient precision to constrain these model-dependent parameters at the ±20-25% (one standard deviation) level. These constraints include only the experimental uncertainties, and further studies are needed to compute the corresponding theoretical uncertainties.
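One standard way to fold a fully correlated systematic into such a model-data comparison is to shift the prediction coherently by a nuisance parameter and penalize the shift in the chi-square; the sketch below illustrates this idea on synthetic points and is not the experiment's exact procedure.

```python
# Minimal sketch: compare a model with data carrying both uncorrelated
# errors and a fully correlated systematic. The prediction is shifted by
# a nuisance parameter b (in units of the correlated error) and b^2 is
# penalized in the chi-square. All numbers are synthetic.
import numpy as np
from scipy.optimize import minimize_scalar

y = np.array([0.22, 0.20, 0.21, 0.19])          # measured suppression-like points
sig_unc = np.array([0.02, 0.02, 0.03, 0.03])    # point-to-point uncorrelated errors
sig_corr = 0.03                                 # fully correlated systematic
model = np.array([0.24, 0.23, 0.22, 0.21])      # model prediction at these points

def chi2(b):
    shifted = model + b * sig_corr              # coherent shift of the prediction
    return np.sum(((y - shifted) / sig_unc) ** 2) + b ** 2

res = minimize_scalar(chi2)
print(f"best shift b={res.x:.2f}, chi2={res.fun:.2f}")
```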
Abstract:
The zero-inflated negative binomial model is used to account for overdispersion detected in data that are initially analyzed under the zero-inflated Poisson model. A frequentist analysis, a jackknife estimator, and a non-parametric bootstrap for parameter estimation of zero-inflated negative binomial regression models are considered. In addition, an EM-type algorithm is developed for performing maximum likelihood estimation. The appropriate matrices for assessing local influence on the parameter estimates under different perturbation schemes are then derived, along with some ways to perform global influence analysis. In order to study departures from the error assumption as well as the presence of outliers, residual analysis based on the standardized Pearson residuals is discussed. The relevance of the approach is illustrated with a real data set, where it is shown that zero-inflated negative binomial regression models seem to fit the data better than the Poisson counterpart.
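The nonparametric (case-resampling) bootstrap mentioned above can be sketched generically. For brevity, the sketch below bootstraps a plain Poisson regression from statsmodels (which also ships ZeroInflatedNegativeBinomialP for the zero-inflated negative binomial case); the data and sample sizes are synthetic.

```python
# Minimal sketch of the nonparametric (case-resampling) bootstrap for
# regression parameters. The fit is a stand-in: statsmodels' Poisson is
# used for brevity where the article fits a zero-inflated negative binomial.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 200
X = sm.add_constant(rng.normal(size=(n, 1)))
y = rng.poisson(np.exp(0.3 + 0.5 * X[:, 1]))

def fit_params(X, y):
    return sm.Poisson(y, X).fit(disp=0).params

B = 200
boot = np.empty((B, X.shape[1]))
for b in range(B):
    idx = rng.integers(0, n, size=n)             # resample cases with replacement
    boot[b] = fit_params(X[idx], y[idx])

print("bootstrap SEs:", boot.std(axis=0, ddof=1))
```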
Abstract:
Valuation of projects for the preservation of water resources provides important information to policy makers and funding institutions. Standard contingent valuation models rely on distributional assumptions to provide welfare measures. Deviations between the assumed and actual distributions of benefits are important when designing policies in developing countries, where inequality is a concern. This article applies semiparametric methods to estimate the benefit from a project for the preservation of an important Brazilian river basin. These estimates differ significantly from those obtained using the standard parametric approach.
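A hedged sketch of one distribution-free route to such benefit estimates from dichotomous-choice data: fit a monotone acceptance curve by isotonic regression and integrate it for a lower-bound mean willingness to pay. The bids, responses, and integration grid are synthetic assumptions, not the article's estimator.

```python
# Minimal sketch: estimate a monotone survival curve P(yes | bid) by
# isotonic regression and integrate it for a lower-bound mean WTP.
# All data are synthetic.
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(4)
bids = rng.choice([5, 10, 20, 40, 80], size=400).astype(float)
p_true = np.exp(-bids / 30.0)                   # latent acceptance probability
yes = rng.random(400) < p_true                  # observed yes/no answers

iso = IsotonicRegression(increasing=False, y_min=0.0, y_max=1.0)
iso.fit(bids, yes.astype(float))

grid = np.linspace(0, 80, 200)
mean_wtp = np.trapz(iso.predict(grid), grid)    # area under the acceptance curve
print(f"estimated lower-bound mean WTP = {mean_wtp:.1f}")
```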
Abstract:
Scototaxis, the preference for dark environments to the detriment of bright ones, is an index of anxiety in zebrafish. In this work, we analyzed avoidance of the white compartment through the spatiotemporal pattern of exploratory behavior (time spent in the white compartment of the apparatus and shuttle frequency between compartments) and a swimming ethogram (thigmotaxis, freezing and burst swimming in the white compartment) in four experiments. In Experiment 1, we demonstrate that spatiotemporal measures of white avoidance and locomotion do not habituate during a single 15-min session. In Experiments 2 and 3, we demonstrate that locomotor activity habituates to repeated exposures to the apparatus, regardless of whether the inter-trial interval is 15 min or 24 h; however, no habituation of white avoidance was observed in either experiment. In Experiment 4, we confined animals for three 15-min sessions in the white compartment prior to recording spatiotemporal and ethogram measures in a standard preference test. After these forced exposures, white avoidance and locomotor activity showed no differences relative to non-confined animals, but burst swimming, thigmotaxis and freezing in the white compartment were all decreased. These results suggest that neither avoidance of the white compartment nor approach to the black compartment accounts for the behavior of zebrafish in the scototaxis test.
Abstract:
In survival analysis applications, the failure rate function may frequently present a unimodal shape. In such cases, the log-normal or log-logistic distributions are used. In this paper, we are concerned only with parametric forms, so a location-scale regression model based on the Burr XII distribution is proposed for modeling data with a unimodal failure rate function, as an alternative to the log-logistic regression model. Assuming censored data, we consider a classic analysis, a Bayesian analysis and a jackknife estimator for the parameters of the proposed model. For different parameter settings, sample sizes and censoring percentages, various simulation studies are performed, and the performance is compared with that of the log-logistic and log-Burr XII regression models. In addition, we use sensitivity analysis to detect influential or outlying observations, and residual analysis to check the model assumptions. Finally, we analyze a real data set under log-Burr XII regression models.
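SciPy ships the Burr XII distribution, so the baseline (uncensored, regression-free) fit can be sketched directly; the censored location-scale regression of the article is richer than this illustration, and the data below are synthetic.

```python
# Minimal sketch: fit a Burr XII distribution to uncensored lifetimes with
# scipy and inspect the implied failure-rate shape. Data are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
times = stats.burr12.rvs(c=2.0, d=1.5, scale=10.0, size=300, random_state=rng)

c, d, loc, scale = stats.burr12.fit(times, floc=0)   # fix loc=0 for lifetimes
print(f"c={c:.2f}, d={d:.2f}, scale={scale:.2f}")

t = np.linspace(0.1, 40, 200)
hazard = stats.burr12.pdf(t, c, d, loc=loc, scale=scale) / stats.burr12.sf(t, c, d, loc=loc, scale=scale)
# For many (c, d) the hazard rises then falls: the unimodal shape targeted here
print("hazard peaks near t =", t[np.argmax(hazard)])
```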
Abstract:
We present an efficient numerical methodology for the 3D computation of incompressible multi-phase flows described by conservative phase-field models. We focus here on the case of density-matched fluids with different viscosities (Model H). The numerical method employs adaptive mesh refinement (AMR) in concert with an efficient semi-implicit time discretization strategy and a linear, multi-level multigrid solver to relax high-order stability constraints and to capture the flow's disparate scales at optimal cost. Only five linear solves are needed per time step. Moreover, all the adaptive methodology is constructed from scratch to allow a systematic investigation of the key aspects of AMR in a conservative phase-field setting. We validate the method and demonstrate its capabilities and efficacy with important examples of drop deformation, Kelvin-Helmholtz instability, and flow-induced drop coalescence.
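The semi-implicit idea, treating the stiff high-order linear term implicitly and the nonlinearity explicitly, can be illustrated on a 1-D Cahn-Hilliard toy problem with a spectral discretization; this is only an analogue of the paper's 3-D AMR/multigrid Model-H solver, with illustrative parameter values.

```python
# Minimal sketch of semi-implicit time stepping for the 1-D Cahn-Hilliard
# equation phi_t = Laplacian(phi^3 - phi) - eps^2 * Laplacian^2(phi) on a
# periodic domain: the stiff fourth-order linear term is implicit in
# Fourier space, the nonlinearity explicit. Parameters are illustrative.
import numpy as np

N, L, eps, dt = 256, 2 * np.pi, 0.05, 1e-4
k = np.fft.fftfreq(N, d=L / N) * 2 * np.pi       # angular wavenumbers
k2, k4 = k**2, k**4

phi = 0.1 * np.random.default_rng(6).standard_normal(N)  # small random initial data
for _ in range(2000):
    nonlin_hat = np.fft.fft(phi**3 - phi)        # explicit nonlinear term
    # (phi^{n+1} - phi^n)/dt = -k^2 * N^n - eps^2 k^4 * phi^{n+1}
    phi_hat = (np.fft.fft(phi) - dt * k2 * nonlin_hat) / (1.0 + dt * eps**2 * k4)
    phi = np.real(np.fft.ifft(phi_hat))

print("phase field range:", phi.min(), phi.max())
```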
Abstract:
The aim of this study was to comparatively assess dental arch width, in the canine and molar regions, by means of direct measurements from plaster models, photocopies and digitized images of the models. The sample consisted of 130 pairs of plaster models, photocopies and digitized images of the models of white patients (n = 65), both genders, with Class I and Class II Division 1 malocclusions, treated by standard Edgewise mechanics and extraction of the four first premolars. Maxillary and mandibular intercanine and intermolar widths were measured by a calibrated examiner, prior to and after orthodontic treatment, using the three modes of reproduction of the dental arches. Dispersion of the data relative to pre- and posttreatment intra-arch linear measurements (mm) was represented as box plots. The three measuring methods were compared by one-way ANOVA for repeated measurements (α = 0.05). Initial / final mean values varied as follows: 33.94 to 34.29 mm / 34.49 to 34.66 mm (maxillary intercanine width); 26.23 to 26.26 mm / 26.77 to 26.84 mm (mandibular intercanine width); 49.55 to 49.66 mm / 47.28 to 47.45 mm (maxillary intermolar width) and 43.28 to 43.41 mm / 40.29 to 40.46 mm (mandibular intermolar width). There were no statistically significant differences between mean dental arch widths estimated by the three studied methods, prior to and after orthodontic treatment. It may be concluded that photocopies and digitized images of the plaster models provided reliable reproductions of the dental arches for obtaining transversal intra-arch measurements.
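A hedged sketch of the repeated-measures comparison described above, using statsmodels' AnovaRM on hypothetical measurements (each case measured by all three methods); the data and column names are stand-ins, not the study's raw measurements.

```python
# Minimal sketch: one-way repeated-measures ANOVA comparing three
# measuring methods (plaster model, photocopy, digitized image) on the
# same cases. Data are hypothetical.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

df = pd.DataFrame({
    "case":   [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "method": ["plaster", "photocopy", "digitized"] * 3,
    "width":  [33.9, 34.0, 33.8, 34.3, 34.2, 34.4, 33.5, 33.6, 33.5],  # mm
})

res = AnovaRM(data=df, depvar="width", subject="case", within=["method"]).fit()
print(res)  # F test for a method effect at alpha = 0.05
```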
Abstract:
Dental impression is an important step in the preparation of prostheses, since it provides the reproduction of anatomic and surface details of teeth and adjacent structures. The objective of this study was to evaluate the linear dimensional alterations in gypsum dies obtained with different elastomeric materials, using a resin coping impression technique with individual shells. A master cast made of stainless steel with fixed prosthesis characteristics and two prepared abutment teeth was used to obtain the impressions. Reference points (A, B, C, D, E, F, G and H) were recorded on the occlusal and buccal surfaces of the abutments to register the distances. The impressions were obtained using the following materials: polyether, mercaptan-polysulfide, addition silicone, and condensation silicone. The transfer impressions were made with custom trays and an irreversible hydrocolloid material and were poured with type IV gypsum. The distances between the identified points on the gypsum dies were measured using an optical microscope and the results were statistically analyzed by ANOVA (p < 0.05) and Tukey's test. The mean distances were registered as follows: addition silicone (AB = 13.6 µm, CD = 15.0 µm, EF = 14.6 µm, GH = 15.2 µm), mercaptan-polysulfide (AB = 36.0 µm, CD = 36.0 µm, EF = 39.6 µm, GH = 40.6 µm), polyether (AB = 35.2 µm, CD = 35.6 µm, EF = 39.4 µm, GH = 41.4 µm) and condensation silicone (AB = 69.2 µm, CD = 71.0 µm, EF = 80.6 µm, GH = 81.2 µm). All of the measurements on the gypsum dies were compared to those of the master cast. The results demonstrated that addition silicone provided the best stability among the compounds tested, followed by polyether, polysulfide and condensation silicone. No statistical differences were obtained between the polyether and mercaptan-polysulfide materials.
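A hedged sketch of the reported analysis pipeline, one-way ANOVA followed by Tukey's test, on illustrative discrepancy values rather than the study's raw data.

```python
# Minimal sketch: one-way ANOVA across impression materials followed by
# Tukey's HSD. Discrepancy values (micrometres) are illustrative only.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(7)
materials = ["addition", "polyether", "polysulfide", "condensation"]
means = [14.0, 38.0, 38.0, 75.0]
data = {m: rng.normal(mu, 3.0, size=10) for m, mu in zip(materials, means)}

F, p = f_oneway(*data.values())
print(f"ANOVA: F={F:.1f}, p={p:.3g}")

values = np.concatenate(list(data.values()))
groups = np.repeat(materials, 10)                # group label per observation
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```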
Abstract:
During four days in the first week of May 2008, a surface cyclone remained semi-stationary off the coast of southern Brazil. This system was responsible for heavy rain and strong winds in Rio Grande do Sul and Santa Catarina, which caused extensive damage (fallen trees, floods, and building collapses). The objective of this work is to evaluate the formation process and to understand the mechanisms responsible for the slow displacement of the cyclone, since most cyclones in this region move faster. The Sutcliffe development equation showed that cyclonic absolute vorticity advection in the middle troposphere and warm-air advection in the 1000-500 hPa layer were important mechanisms for cyclogenesis. During this period, intense diabatic heating also contributed to cyclogenesis, as it counteracted the intense adiabatic cooling due to upward vertical motion. The cyclonic absolute vorticity advection that favored cyclogenesis was associated with an upper-level cyclonic vortex (Vórtice Ciclônico em Altos Níveis, VCAN), which formed in a region of potential vorticity anomaly. The VCAN remained semi-stationary and made up the northern sector of a dipole-type blocking. This blocking intensified a surface anticyclone located south/east of the cyclone, which helped keep the cyclone semi-stationary. The atypical, slow southward (and at times southwestward) motion of the cyclone was associated with advection of cyclonic absolute vorticity in the middle troposphere and of warm air in its southern sector. Only when the mid-level blocking and the mid/upper-level potential vorticity anomaly weakened did the surface cyclone move away from the southern coast of Brazil.
Abstract:
A study is presented of linearly organized convective systems observed by a C-band meteorological radar in the semi-arid region of Northeast Brazil. Three days (27 to 29 March 1985) are analyzed, with emphasis on investigating the role played by local and large-scale factors in the development of the systems. In the large-scale scenario, the radar coverage area was influenced by a southern upper-air trough on the 27th and by an upper-level cyclonic vortex on the 29th. Near-surface moisture convergence favored convective activity on the 27th and 29th, whereas near-surface moisture divergence inhibited convective activity on the 28th. In the mesoscale scenario, diurnal heating was observed to be an important factor in the formation of convective cells, in addition to the decisive role of orography in the location of the echoes. In general, the radar images show the convective systems linearly organized over elevated areas, with intense convective cores surrounded by an area of stratiform precipitation. The results indicate that large-scale moisture flux convergence and radiative heating are decisive factors in the evolution and development of the echoes in the study area.