46 results for "Application to model approach"
in Biblioteca Digital da Produção Intelectual da Universidade de São Paulo (BDPI/USP)
A robust Bayesian approach to null intercept measurement error model with application to dental data
Abstract:
Measurement error models often arise in epidemiological and clinical research. Usually, in this setup it is assumed that the latent variable has a normal distribution. However, the normality assumption may not always be correct. The skew-normal/independent distributions form a class of asymmetric, thick-tailed distributions that includes the skew-normal distribution as a special case. In this paper, we explore the use of skew-normal/independent distributions as a robust alternative for the null intercept measurement error model under a Bayesian paradigm. We assume that the random errors and the unobserved value of the covariate (latent variable) jointly follow a skew-normal/independent distribution, providing an appealing robust alternative to the routine use of the symmetric normal distribution in this type of model. Specific distributions examined include univariate and multivariate versions of the skew-normal, skew-t, skew-slash and skew-contaminated normal distributions. The methods developed are illustrated using a real data set from a dental clinical trial. (C) 2008 Elsevier B.V. All rights reserved.
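For orientation, a minimal sketch of a generic null intercept measurement error formulation under the skew-normal/independent (SNI) family; the notation (y_{ij}, x_i, e_{ij}, H) is chosen here for illustration and need not match the authors' exact specification:

    y_{ij} = \beta_j \, x_i + e_{ij}, \qquad i = 1, \dots, n, \; j = 1, \dots, r,
    (x_i, e_{i1}, \dots, e_{ir})^{\top} \sim \mathrm{SNI}(\boldsymbol{\mu}, \boldsymbol{\Sigma}, \boldsymbol{\lambda}; H),

where H is the mixing distribution of the scale mixture: a degenerate H gives the skew-normal case, while other choices of H yield the skew-t, skew-slash and skew-contaminated normal members of the family.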
Abstract:
This article is dedicated to harmonic wavelet Galerkin methods for the solution of partial differential equations. Several variants of the method are proposed and analyzed, using the Burgers equation as a test model. The computational complexity can be reduced when the localization properties of the wavelets and restricted interactions between different scales are exploited. The resulting variants of the method have computational complexities ranging from O(N^3) to O(N) (N being the space dimension) per time step. A pseudo-spectral wavelet scheme is also described and compared to the methods based on connection coefficients. The harmonic wavelet Galerkin scheme is applied to a nonlinear model for the propagation of precipitation fronts, with the front locations revealed by the sizes of the localized wavelet coefficients. (C) 2011 Elsevier Ltd. All rights reserved.
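For reference, the viscous Burgers equation used as the test model, together with the generic Galerkin ansatz in a wavelet basis (generic notation, not tied to the specific harmonic wavelet construction of the article):

    u_t + u\, u_x = \nu\, u_{xx},
    u(x, t) \approx \sum_{k} c_k(t)\, \psi_k(x),

where projecting the equation onto each basis function \psi_k yields a system of ordinary differential equations for the coefficients c_k(t); the quadratic term u\,u_x is what gives rise to the connection coefficients mentioned above.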
Abstract:
With each directed acyclic graph (this includes some D-dimensional lattices) one can associate certain Abelian algebras that we call directed Abelian algebras (DAAs). On each site of the graph one attaches a generator of the algebra. These algebras depend on several parameters and are semisimple. Using any DAA, one can define a family of Hamiltonians which give the continuous-time evolution of a stochastic process. The calculation of the spectra and ground-state wave functions (stationary-state probability distributions) is an easy algebraic exercise. If one considers D-dimensional lattices and chooses Hamiltonians linear in the generators, in finite-size scaling the Hamiltonian spectrum is gapless with a critical dynamic exponent z = D. One possible application of the DAA is to sandpile models. In the paper we present this application, considering one- and two-dimensional lattices. In the one-dimensional case, when the DAA conserves the number of particles, the avalanches belong to the random walker universality class (critical exponent σ_τ = 3/2). We study the local density of particles inside large avalanches, showing a depletion of particles at the source of the avalanche and an enrichment at its end. In two dimensions, we performed extensive Monte Carlo simulations and found σ_τ = 1.780 ± 0.005.
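The random walker universality class mentioned above can be illustrated numerically: the distribution of first-return times of an unbiased one-dimensional random walk decays with the same 3/2 exponent. The short sketch below (plain Python/NumPy) estimates that tail; it illustrates the universality class only, not the DAA construction itself.

import numpy as np

rng = np.random.default_rng(0)

def first_return_time(max_steps=10_000):
    """First time an unbiased +/-1 random walk started at 0 returns to 0 (None if censored)."""
    pos = np.cumsum(rng.choice((-1, 1), size=max_steps))
    hits = np.flatnonzero(pos == 0)
    return int(hits[0]) + 1 if hits.size else None

# Tail of the return-time distribution: P(T >= t) ~ t^(-1/2),
# equivalently P(T = t) ~ t^(-3/2), the random-walker exponent.
times = np.array([t for t in (first_return_time() for _ in range(10_000)) if t is not None])
for t in (10, 100, 1000):
    print(f"P(T >= {t:4d}) is approximately {np.mean(times >= t):.3f} (theory ~ t^(-1/2))")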
Abstract:
We address here aspects of the implementation of a memory evolutive system (MES), based on the model proposed by A. Ehresmann and J. Vanbremeersch (2007), by means of a simulated network of spiking neurons with time-dependent plasticity. We point out the advantages and challenges of applying category theory to the representation of cognition by using the MES architecture. We then discuss the minimum requirements that an artificial neural network (ANN) should fulfill in order to be capable of expressing the categories, and the mappings between them, that underlie the MES. We conclude that a pulsed ANN based on Izhikevich's formal neuron with STDP (spike-timing-dependent plasticity) has sufficient dynamical properties to meet these requirements, provided it can cope with the topological requirements. Finally, we present some perspectives for future research concerning the proposed ANN topology.
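For concreteness, a minimal simulation of Izhikevich's formal neuron with regular-spiking parameters (plain Python); STDP and network topology are omitted, so this is only a sketch of the single-unit dynamics such an implementation would build on.

# Izhikevich (2003) model: v' = 0.04 v^2 + 5 v + 140 - u + I,  u' = a (b v - u),
# with reset v <- c, u <- u + d whenever v reaches 30 mV.
a, b, c, d = 0.02, 0.2, -65.0, 8.0   # regular-spiking parameters
dt, T, I = 0.25, 1000.0, 10.0        # time step (ms), duration (ms), input current

v, u = -65.0, b * (-65.0)
spike_times = []
for step in range(int(T / dt)):
    v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
    u += dt * a * (b * v - u)
    if v >= 30.0:                    # spike: record the time and reset
        spike_times.append(step * dt)
        v, u = c, u + d

print(f"{len(spike_times)} spikes in {T:.0f} ms")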
Abstract:
For the first time, we introduce and study some mathematical properties of the Kumaraswamy Weibull distribution, a quite flexible model for analyzing positive data. It contains as special sub-models the exponentiated Weibull, exponentiated Rayleigh, exponentiated exponential and Weibull distributions, and also the new Kumaraswamy exponential distribution. We provide explicit expressions for the moments and the moment generating function. We examine the asymptotic distributions of the extreme values. Explicit expressions are derived for the mean deviations, Bonferroni and Lorenz curves, reliability and Rényi entropy. The moments of the order statistics are calculated. We also discuss the estimation of the parameters by maximum likelihood and obtain the expected information matrix. We provide applications involving two real data sets on failure times. Finally, some multivariate generalizations of the Kumaraswamy Weibull distribution are discussed. (C) 2010 The Franklin Institute. Published by Elsevier Ltd. All rights reserved.
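For reference, the Kumaraswamy-G construction applied to a Weibull baseline gives a cumulative distribution function of the form below; the Weibull parameterization shown is one common choice and may differ from the authors':

    F(x) = 1 - \Bigl\{ 1 - \bigl[ 1 - e^{-(\beta x)^{c}} \bigr]^{a} \Bigr\}^{b}, \qquad x > 0, \; a, b, c, \beta > 0,

where a = b = 1 recovers the Weibull distribution and b = 1 gives the exponentiated Weibull sub-model.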
Abstract:
A four-parameter extension of the generalized gamma distribution capable of modelling a bathtub-shaped hazard rate function is defined and studied. The beauty and importance of this distribution lie in its ability to model monotone and non-monotone failure rate functions, which are quite common in lifetime data analysis and reliability. The new distribution has a number of well-known lifetime distributions as special sub-models, such as the exponentiated Weibull, exponentiated generalized half-normal, exponentiated gamma and generalized Rayleigh, among others. We derive two infinite sum representations for its moments. We calculate the density of the order statistics and two expansions for their moments. The method of maximum likelihood is used for estimating the model parameters, and the observed information matrix is obtained. Finally, a real data set from the medical area is analysed.
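The hazard (failure) rate referred to above is the usual one; whether it is increasing, decreasing, bathtub-shaped or unimodal is governed by the interplay of the four parameters:

    h(t) = \frac{f(t)}{S(t)} = \frac{f(t)}{1 - F(t)}, \qquad t > 0,

with f, F and S = 1 - F the density, distribution and survival functions of the new distribution.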
Abstract:
This article presents a statistical model of agricultural yield data based on a set of hierarchical Bayesian models that allows joint modeling of temporal and spatial autocorrelation. This method captures a comprehensive range of the uncertainties involved in predicting crop insurance premium rates, as opposed to the more traditional ad hoc, two-stage methods that are typically based on independent estimation and prediction. A panel data set of county-average yield data was analyzed for 290 counties in the State of Paraná (Brazil) for the period 1990 through 2002. Posterior predictive criteria are used to evaluate different model specifications. This article provides substantial improvements in the statistical and actuarial methods often applied to the calculation of insurance premium rates. These improvements are especially relevant to situations where data are limited.
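A generic hierarchical layout for such county-level panel data, shown only to fix ideas and not as the authors' exact specification, combines a spatially structured county effect with a temporally autocorrelated year effect:

    y_{it} = \mu + \phi_i + \delta_t + \varepsilon_{it}, \qquad
    \boldsymbol{\phi} \sim \mathrm{CAR}(\tau_{\phi}), \quad
    \delta_t \mid \delta_{t-1} \sim \mathrm{N}(\rho\, \delta_{t-1}, \tau_{\delta}^{-1}), \quad
    \varepsilon_{it} \sim \mathrm{N}(0, \sigma^2),

where y_{it} is the average yield of county i in year t; premium rates then follow from the posterior predictive distribution of future yields rather than from separately estimated quantities.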
Abstract:
Tramadol (T) is available as a racemic mixture of (+)-trans-T and (-)-trans-T. The main metabolic pathways are O-demethylation and N-demethylation, producing trans-O-desmethyltramadol (M1) and trans-N-desmethyltramadol (M2) enantiomers, respectively. The analgesic effect of T is related to the opioid activity of (+)-trans-T and (+)-M1 and to the monoaminergic action of (+/-)-trans-T. This is the first study using tandem mass spectrometry as a detection system for the simultaneous analysis of trans-T, M1 and M2 enantiomers. The analytes were resolved on a Chiralpak® AD column using hexane:ethanol (95.5:4.5, v/v) plus 0.1% diethylamine as the mobile phase. The quantitation limits were 0.5 ng/ml for trans-T and M1 and 0.1 ng/ml for M2. The method developed and validated here was applied to a pharmacokinetic study in rats. Male Wistar rats (n = 6 at each time point) received a single oral dose of 20 mg/kg racemic trans-T. Blood samples were collected up to 12 h after drug administration. The kinetic disposition of trans-T and M2 was enantioselective (AUC (+)/(-) ratios = 4.16 and 6.36, respectively). The direction and extent of enantioselectivity in the pharmacokinetics of trans-T and M2 in rats were comparable to data previously reported for healthy volunteers, suggesting that rats are a suitable model for enantioselective studies of trans-T pharmacokinetics. Chirality 23: 287-293, 2011. (C) 2010 Wiley-Liss, Inc.
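The enantioselectivity metric quoted above is a ratio of areas under the plasma concentration-time curves of the two enantiomers. A minimal sketch of how such an AUC ratio is computed by the trapezoidal rule; the times and concentrations below are hypothetical placeholders, not data from the study.

import numpy as np

def auc_trapezoid(t, c):
    """Area under the concentration-time curve by the trapezoidal rule."""
    t, c = np.asarray(t, float), np.asarray(c, float)
    return float(np.sum(0.5 * (c[1:] + c[:-1]) * np.diff(t)))

# Hypothetical sampling times (h) and plasma concentrations (ng/ml)
# for the (+)- and (-)-enantiomers.
t       = [0.25, 0.5, 1, 2, 4, 6, 8, 12]
c_plus  = [40, 85, 120, 150, 110, 70, 40, 15]
c_minus = [12, 22, 30, 36, 26, 17, 10, 4]

ratio = auc_trapezoid(t, c_plus) / auc_trapezoid(t, c_minus)
print(f"AUC(+)/AUC(-) ratio = {ratio:.2f}")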
Abstract:
This article attempts to elucidate one of the mechanisms that link trade barriers, in the form of port costs, to subsequent growth and regional inequality. Prior attention has focused on inland or link costs, but port costs can be considered a further barrier to enhancing trade liberalization and growth. In contrast to a highway link, congestion at a port may have severe impacts that spread over space and time, whereas highway link congestion may be resolved within several hours. Since a port is part of the transportation network, any congestion or disruption is likely to ripple throughout the hinterland. In this sense, it is important to properly model the role that nodal components play in the context of spatial models and international trade. In this article, a spatial computable general equilibrium (CGE) model integrated with a transport network system is presented to simulate the impacts of increases in port efficiency in Brazil. The roles of ports of entry and ports of exit are explicitly considered to grasp the holistic picture in an integrated interregional system. Measures of efficiency for different port locations are incorporated in the calibration of the model and used as the benchmark in our simulations. Three scenarios are evaluated: (1) an overall increase in port efficiency in Brazil to achieve international standards; (2) efficiency gains associated with decentralization in port management in Brazil; and (3) regionally differentiated increases in port efficiency to reach the boundary of the national efficiency frontier.
Abstract:
Prediction of carbohydrate fractions using equations from the Cornell Net Carbohydrate and Protein System (CNCPS) is a valuable tool to assess the nutritional value of forages. In this paper these carbohydrate fractions were predicted using data from three sunflower (Helianthus annuus L.) cultivars, fresh or as silage. The CNCPS equations for fractions B(2) and C include measurement of ash- and protein-free neutral detergent fibre (NDF) as one of their components. However, NDF lacks pectin and other non-starch polysaccharides that are found in the cell wall (CW) matrix, so this work compared the use of a crude CW preparation instead of NDF in the CNCPS equations. There were no differences in the estimates of fractions B(1) and C when CW replaced NDF; however, there were differences in fractions A and B(2). Some of the CNCPS equations could be simplified when using CW instead of NDF. Notably, lignin could be expressed as a proportion of DM, rather than on the basis of ash- and protein-free NDF, when predicting CNCPS fraction C. The CNCPS fraction B(1) (starch + pectin) values were lower than pectin determined through wet chemistry. This finding, along with the results obtained by the substitution of CW for NDF in the CNCPS equations, suggests that pectin was not part of fraction B(1) but present in fraction A. We suggest that pectin and other non-starch polysaccharides that are dissolved by the neutral detergent solution be allocated to a specific fraction (B(2)) and that another fraction (B(3)) be adopted for the digestible cell wall carbohydrates.
Abstract:
In this article, we are interested in evaluating different parameter estimation strategies for a multiple linear regression model. To estimate the model parameters, data were used from a clinical trial whose aim was to verify whether the mechanical test of the maximum force property (EM-FM) is associated with femoral mass, femoral diameter and the experimental group of ovariectomized rats of the species Rattus norvegicus albinus, Wistar variety. Three methodologies are compared for estimating the model parameters: the classical methodology, based on the least squares method; the Bayesian methodology, based on Bayes' theorem; and the bootstrap method, based on resampling procedures.
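A minimal sketch of two of the three strategies, ordinary least squares and case-resampling bootstrap, on simulated data that merely stands in for the (unavailable) trial measurements; the Bayesian fit would additionally require specifying priors.

import numpy as np

rng = np.random.default_rng(42)

# Simulated stand-in data: intercept column plus two covariates.
n = 60
X = np.column_stack([np.ones(n), rng.normal(1.0, 0.2, n), rng.normal(4.0, 0.5, n)])
beta_true = np.array([2.0, 1.5, 0.8])
y = X @ beta_true + rng.normal(0.0, 0.3, n)

# Classical estimate: ordinary least squares.
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# Bootstrap: refit on resampled cases to get an empirical distribution of estimates.
B = 2000
boot = np.empty((B, X.shape[1]))
for i in range(B):
    idx = rng.integers(0, n, n)
    boot[i], *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)

print("OLS estimate:    ", np.round(beta_ols, 3))
print("Bootstrap mean:  ", np.round(boot.mean(axis=0), 3))
print("Bootstrap 95% CI:", np.round(np.percentile(boot, [2.5, 97.5], axis=0), 3))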
Abstract:
We consider the two-level network design problem with intermediate facilities. This problem consists of designing a minimum-cost network respecting some requirements, usually described in terms of the network topology or in terms of a desired flow of commodities between source and destination vertices. Each selected link must receive one of two types of edge facilities, and the connection of different edge facilities requires a costly and capacitated vertex facility. We propose a hybrid decomposition approach which heuristically obtains tentative solutions for the number and location of vertex facilities and uses these solutions to limit the computational burden of a branch-and-cut algorithm. We test our method on instances of the power system secondary distribution network design problem. The results show that the method is efficient both in terms of solution quality and computational times. (C) 2010 Elsevier Ltd. All rights reserved.
Abstract:
In this paper, we develop a flexible cure rate survival model by assuming the number of competing causes of the event of interest to follow the Conway-Maxwell Poisson distribution. This model includes as special cases some of the well-known cure rate models discussed in the literature. Next, we discuss the maximum likelihood estimation of the parameters of this cure rate survival model. Finally, we illustrate the usefulness of this model by applying it to a real cutaneous melanoma data set. (C) 2009 Elsevier B.V. All rights reserved.
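For reference, the Conway-Maxwell Poisson law for the number N of competing causes and the induced population survival function take the generic form below (the parameterization is shown for illustration and may differ from the paper's):

    P(N = n) = \frac{\theta^{n}}{(n!)^{\phi}\, Z(\theta, \phi)}, \qquad
    Z(\theta, \phi) = \sum_{j=0}^{\infty} \frac{\theta^{j}}{(j!)^{\phi}},

    S_{\mathrm{pop}}(t) = \sum_{n=0}^{\infty} P(N = n)\, S(t)^{n}
    = \frac{1}{Z(\theta, \phi)} \sum_{n=0}^{\infty} \frac{\{\theta\, S(t)\}^{n}}{(n!)^{\phi}},

so the cured proportion is P(N = 0) = 1 / Z(\theta, \phi), and \phi = 1 recovers the classical Poisson (promotion time) cure rate model with S_{\mathrm{pop}}(t) = \exp\{-\theta\,[1 - S(t)]\}.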
Abstract:
A novel mathematical framework inspired by Morse theory for the topological characterization of triangles in 2D meshes is introduced, which is useful for applications involving the creation of mesh models of objects whose geometry is not known a priori. The framework guarantees precise control of the topological changes introduced as a result of triangle insertion/removal operations and enables the definition of intuitive high-level operators for managing the mesh while keeping its topological integrity. An application is described in the implementation of an innovative approach for the detection of 2D objects from images that integrates the topological control enabled by geometric modeling with traditional image processing techniques. (C) 2008 Published by Elsevier B.V.
Abstract:
Unlike theoretical scale-free networks, most real networks present multi-scale behavior, with nodes structured in different types of functional groups and communities. While the majority of approaches for the classification of nodes in a complex network have relied on local measurements of the topology/connectivity around each node, valuable information about node functionality can be obtained by concentric (or hierarchical) measurements. This paper extends previous methodologies based on concentric measurements by studying the possibility of using agglomerative clustering methods in order to obtain a set of functional groups of nodes, considering particular institutional collaboration network nodes, including various known communities (departments of the University of São Paulo). Among the interesting findings, we emphasize the scale-free nature of the obtained network, as well as the identification of different patterns of authorship emerging from different areas (e.g. human and exact sciences). Another interesting result concerns the relatively uniform distribution of hubs along concentric levels, in contrast to the non-uniform pattern found in theoretical scale-free networks such as the BA model. (C) 2008 Elsevier B.V. All rights reserved.
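A minimal sketch of the generic pipeline described above, concentric (hierarchical) measurements per node followed by agglomerative clustering, using NetworkX and SciPy on a synthetic graph; it illustrates the idea only, not the authors' exact measurements or data.

import numpy as np
import networkx as nx
from scipy.cluster.hierarchy import linkage, fcluster

# Synthetic stand-in for a collaboration network.
G = nx.barabasi_albert_graph(200, 3, seed=1)

# Concentric measurements: number of nodes at each shortest-path distance
# (ring) from every node, up to a maximum radius R.
R = 4
features = np.zeros((G.number_of_nodes(), R))
for v in G.nodes():
    dist = nx.single_source_shortest_path_length(G, v, cutoff=R)
    for u, d in dist.items():
        if 1 <= d <= R:
            features[v, d - 1] += 1

# Agglomerative (Ward) clustering of the nodes in concentric-feature space.
Z = linkage(features, method="ward")
labels = fcluster(Z, t=6, criterion="maxclust")
print("cluster sizes:", np.bincount(labels)[1:])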