936 results for errors-in-variables model


Relevance: 100.00%

Abstract:

Forty Cryptococcus gattii strains were submitted to antifungal susceptibility testing with fluconazole, itraconazole, amphotericin B and terbinafine. The minimum inhibitory concentration (MIC) ranges, in µg/mL, were 0.5-64.0 for fluconazole, <0.015-0.25 for itraconazole, 0.015-0.5 for amphotericin B and 0.062-2.0 for terbinafine. A bioassay for the quantitation of fluconazole in murine brain tissue was developed. Swiss mice received daily injections of the antifungal, and their brains were withdrawn at different times over the 14-day study period. The drug concentrations varied from 12.98 to 44.60 µg/mL. This assay was used to evaluate fluconazole therapy in a model of infection caused by C. gattii. Swiss mice were infected intracranially and treated with fluconazole for 7, 10 or 14 days. The treatment reduced the fungal burden, but an increase in fungal growth was observed on day 14. The fluconazole MIC against sequential isolates was 16 µg/mL, except for the isolates obtained from animals treated for 14 days (MIC = 64 µg/mL). The quantitation of cytokines revealed a predominance of IFN-γ and IL-12 in the non-treated group and elevation of IL-4 and IL-10 in the treated group. Our data reveal the possibility of acquired resistance during antifungal drug therapy.

Relevance: 100.00%

Abstract:

It is known that patients may cease participating in a longitudinal study and become lost to follow-up. The objective of this article is to present a Bayesian model to estimate malaria transition probabilities while accounting for individuals lost to follow-up. We consider a homogeneous population, and it is assumed that the period of time considered is small enough to avoid two or more transitions from one state of health to another. The proposed model is based on a Gibbs sampling algorithm that uses information on losses to follow-up at the end of the longitudinal study. To simulate the unknown numbers of individuals with positive and negative malaria states at the end of the study among those lost to follow-up, two latent variables were introduced in the model. We used a real data set and a simulated data set to illustrate the application of the methodology. The proposed model showed a good fit to these data sets, and the algorithm did not show problems of convergence or lack of identifiability. We conclude that the proposed model is a good alternative for estimating transition probabilities from one state of health to another in studies with low adherence to follow-up.
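The imputation scheme described above can be sketched as a toy Gibbs sampler. Everything specific here — binomial transitions, Beta(1, 1) priors, and the particular counts — is a hypothetical illustration, not the paper's actual model:

```python
import random

def gibbs_transitions(n_pos, n_pos_to_neg, n_neg, n_neg_to_pos,
                      n_lost_pos, n_lost_neg, iters=2000, seed=1):
    """Toy Gibbs sampler for transition probabilities with lost-to-follow-up.

    p = P(positive -> negative), q = P(negative -> positive).
    Individuals lost to follow-up have unknown final states; two latent
    counts (z_p, z_q) are imputed on every sweep.  Beta(1, 1) priors assumed.
    """
    rng = random.Random(seed)
    p, q = 0.5, 0.5
    draws = []
    for _ in range(iters):
        # Latent step: impute how many lost individuals changed state,
        # given the current transition probabilities.
        z_p = sum(rng.random() < p for _ in range(n_lost_pos))
        z_q = sum(rng.random() < q for _ in range(n_lost_neg))
        # Conjugate Beta updates given observed + imputed transitions.
        p = rng.betavariate(1 + n_pos_to_neg + z_p,
                            1 + (n_pos - n_pos_to_neg) + (n_lost_pos - z_p))
        q = rng.betavariate(1 + n_neg_to_pos + z_q,
                            1 + (n_neg - n_neg_to_pos) + (n_lost_neg - z_q))
        draws.append((p, q))
    tail = draws[iters // 2:]          # discard burn-in
    return (sum(d[0] for d in tail) / len(tail),
            sum(d[1] for d in tail) / len(tail))
```

With, say, 80 positives of whom 20 turn negative, the posterior mean of p lands near the observed proportion 0.25, shifted slightly by the imputed lost-to-follow-up outcomes.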

Relevance: 100.00%

Abstract:

The skew-normal distribution is a class of distributions that includes the normal distribution as a special case. In this paper, we explore the use of Markov chain Monte Carlo (MCMC) methods to develop a Bayesian analysis of a multivariate, null-intercept, measurement error model [R. Aoki, H. Bolfarine, J.A. Achcar, and D. Leao Pinto Jr, Bayesian analysis of a multivariate null intercept errors-in-variables regression model, J. Biopharm. Stat. 13(4) (2003b), pp. 763-771] where the unobserved value of the covariate (a latent variable) follows a skew-normal distribution. The results and methods are applied to a real dental clinical trial presented in [A. Hadgu and G. Koch, Application of generalized estimating equations to a dental randomized clinical trial, J. Biopharm. Stat. 9 (1999), pp. 161-178].

Relevance: 100.00%

Abstract:

Objective. To investigate the short-term effects of exposure to particulate matter from biomass burning in the Amazon on the daily demand for outpatient care due to respiratory diseases in children and the elderly. Methods. Epidemiologic study with an ecologic time-series design. Daily consultation records were obtained from the 14 primary health care clinics in the municipality of Alta Floresta, state of Mato Grosso, in the southern region of the Brazilian Amazon, between January 2004 and December 2005. Information on the daily levels of fine particulate matter was made available by the Brazilian National Institute for Space Research. To control for confounding factors (situations in which a non-causal association between exposure and disease is observed due to a third variable), variables related to time trends, seasonality, temperature, relative humidity, rainfall, and calendar effects (such as the occurrence of holidays and weekends) were included in the model. Poisson regression with generalized additive models was used. Results. A 10 µg/m³ increase in the level of exposure to particulate matter was associated with increases of 2.9% and 2.6% in outpatient consultations due to respiratory diseases in children on the 6th and 7th days following exposure, respectively. Significant associations were not observed for elderly individuals. Conclusions. The results suggest that the levels of particulate matter from biomass burning in the Amazon are associated with adverse effects on the respiratory health of children.
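As a quick check on the reported effect size, the 2.9% increase per 10 µg/m³ can be converted to a log-scale relative-risk coefficient, as in a Poisson regression, and extrapolated to other increments (the 25 µg/m³ rise below is an illustrative value, not from the study):

```python
import math

# Reported association: a 10 µg/m³ rise in particulate matter was linked
# to a 2.9% increase in children's respiratory consultations at lag 6.
rr_per_10 = 1.029
beta = math.log(rr_per_10) / 10.0    # log relative risk per 1 µg/m³

# Implied relative risk for a hypothetical 25 µg/m³ rise,
# assuming the log-linear dose-response of the Poisson model.
rr_25 = math.exp(beta * 25.0)
```

This is exactly the back-of-the-envelope use of such coefficients: multiplying beta by any exposure increment and exponentiating gives the implied relative risk at that increment.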

Relevance: 100.00%

Abstract:

A previously proposed model describing the trapping site of interstitial atomic hydrogen in borate glasses is analyzed. In this model the atomic hydrogen is stabilized by van der Waals forces at the centers of oxygen polygons belonging to B-O ring structures in the glass network. The previously reported atomic hydrogen isothermal decay experimental data are discussed in the light of this microscopic model. A coupled differential equation system for the observed decay kinetics was solved numerically using the Runge-Kutta method. The experimental untrapping activation energy of 0.7 × 10⁻¹⁹ J is in good agreement with the calculated results of the dispersion interaction between the stabilized atomic hydrogen and the neighboring oxygen atoms at the vertices of hexagonal ring structures. (C) 2009 Elsevier B.V. All rights reserved.
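The numerical approach described — integrating a coupled system of decay equations with the Runge-Kutta method — can be sketched as follows. The two-state kinetics and the rate constants are hypothetical stand-ins, not the paper's actual system:

```python
def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

# Hypothetical coupled kinetics: trapped hydrogen n_t escapes at rate
# K_U; the freed hydrogen n_f then disappears (recombines) at rate K_R.
K_U, K_R = 0.5, 2.0

def decay(t, y):
    n_t, n_f = y
    return [-K_U * n_t, K_U * n_t - K_R * n_f]

y, t, h = [1.0, 0.0], 0.0, 0.01
for _ in range(1000):               # integrate the system out to t = 10
    y = rk4_step(decay, t, y, h)
    t += h
```

For this linear toy system the trapped population follows exp(-K_U·t) exactly, which makes it easy to verify that the RK4 integration is accurate at this step size.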

Relevance: 100.00%

Abstract:

The Birnbaum-Saunders regression model is becoming increasingly popular in lifetime analyses and reliability studies. In this model, the signed likelihood ratio statistic provides the basis for hypothesis testing and the construction of confidence limits for a single parameter of interest. We focus on the small-sample case, where the standard normal distribution gives a poor approximation to the true distribution of the statistic. We derive three adjusted signed likelihood ratio statistics that lead to very accurate inference even for very small samples. Two empirical applications are presented. (C) 2010 Elsevier B.V. All rights reserved.
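To illustrate the unadjusted signed likelihood ratio statistic r = sign(θ̂ − θ₀)·√(2[ℓ(θ̂) − ℓ(θ₀)]), the sketch below uses an exponential lifetime model, where the MLE is available in closed form; the Birnbaum-Saunders likelihood itself is more involved, and the paper's three adjustments are not reproduced here:

```python
import math

def signed_lrt(data, theta0):
    """Signed likelihood-ratio statistic r = sign(mle - theta0) * sqrt(w)
    for the rate of an exponential lifetime model (a closed-form stand-in
    for the Birnbaum-Saunders likelihood)."""
    n, s = len(data), sum(data)
    theta_hat = n / s                              # MLE of the rate
    def loglik(th):
        return n * math.log(th) - th * s           # up to a constant
    w = 2.0 * (loglik(theta_hat) - loglik(theta0)) # likelihood-ratio stat
    return math.copysign(math.sqrt(max(w, 0.0)), theta_hat - theta0)
```

Referring r to the standard normal distribution is the first-order approximation the paper improves on; it is exactly this approximation that degrades in small samples.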

Relevance: 100.00%

Abstract:

This work presents a Bayesian semiparametric approach for dealing with regression models where the covariate is measured with error. Given that (1) the normality assumption for the errors is very restrictive, and (2) assuming a specific elliptical distribution for the errors (Student-t, for example) may be somewhat presumptuous, there is a need for more flexible methods that assume only symmetry of the errors (admitting unknown kurtosis). In this sense, the main advantage of this extended Bayesian approach is the possibility of considering generalizations of the elliptical family of models by using Dirichlet process priors in dependent and independent situations. Conditional posterior distributions are implemented, allowing the use of Markov chain Monte Carlo (MCMC) to generate the posterior distributions. An interesting result is that the Dirichlet process prior is not updated in the case of the dependent elliptical model. Furthermore, an analysis of a real data set is reported to illustrate the usefulness of our approach in dealing with outliers. Finally, the proposed semiparametric models and the parametric normal model are compared graphically through the posterior densities of the coefficients. (C) 2009 Elsevier Inc. All rights reserved.
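A minimal sketch of the Dirichlet process prior mentioned above, via the truncated stick-breaking construction; the truncation level and standard-normal base measure below are arbitrary illustrative choices, not from the paper:

```python
import random

def stick_breaking(alpha, n_atoms, base_draw, seed=0):
    """Truncated stick-breaking construction of one draw from a
    Dirichlet process with concentration alpha and base measure base_draw.
    """
    rng = random.Random(seed)
    weights, atoms, remaining = [], [], 1.0
    for _ in range(n_atoms):
        v = rng.betavariate(1.0, alpha)     # stick fraction ~ Beta(1, alpha)
        weights.append(remaining * v)       # break off a piece of the stick
        atoms.append(base_draw(rng))        # atom location from base measure
        remaining *= (1.0 - v)
    weights.append(remaining)               # leftover mass on one last atom
    atoms.append(base_draw(rng))
    return weights, atoms
```

By construction the weights telescope to exactly 1, so each draw is a discrete probability measure — the key feature the semiparametric error model exploits.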

Relevance: 100.00%

Abstract:

Hajnal and Juhász proved that under CH there is a hereditarily separable, hereditarily normal topological group without non-trivial convergent sequences that is countably compact and not Lindelöf. The example constructed is a topological subgroup H ⊆ 2^(ω₁) that is an HFD with the following property (P): the projection of H onto every partial product 2^I for I ∈ [ω₁]^ω is onto. Any such group has the necessary properties. We prove that if κ is a cardinal of uncountable cofinality, then in the model obtained by forcing over a model of CH with the measure algebra on 2^κ, there is an HFD topological group in 2^(ω₁) which has property (P). Crown Copyright (C) 2009 Published by Elsevier B.V. All rights reserved.

Relevance: 100.00%

Abstract:

After declining steadily for several decades, the South China tiger (Panthera tigris amoyensis) is now thought to be extinct in the wild. However, there is some hope of reintroduction, with Hupingshan-Houhe and Mangshan-Nanling National Nature Reserves in southern China seeming to hold the most promise. Our study used slope, elevation, vegetation, and landcover variables to construct a rough habitat suitability index for tigers in these two parks. According to our model, there are areas of suitable habitat within both parks. However, there are some important variables that we were unable to include in our model, such as human population density and prey availability. Considerable in-depth research will be necessary to evaluate the suitability of these locations before reintroduction is considered.
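A habitat suitability index of this kind is often computed as a weighted overlay of per-variable suitability scores. The sketch below uses the four variables named in the abstract, but the scores, equal weights, and the weighted-mean combination rule are purely illustrative assumptions:

```python
def hsi(scores, weights):
    """Weighted arithmetic mean of per-variable suitability scores in [0, 1].

    scores:  {variable: suitability score for one map cell}
    weights: {variable: relative importance}, summing to 1
    """
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[k] * scores[k] for k in scores)

# Hypothetical per-cell scores for the four variables in the abstract.
cell = {"slope": 0.8, "elevation": 0.9, "vegetation": 0.6, "landcover": 0.7}
equal_w = {k: 0.25 for k in cell}
suitability = hsi(cell, equal_w)   # 0.75 for this hypothetical cell
```

In practice such an index is evaluated cell by cell over a raster of the two reserves, and variables like human population density and prey availability, which the study could not include, would enter as additional scored layers.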

Relevance: 100.00%

Abstract:

We report results on the optimal "choice of technique" in a model originally formulated by Robinson, Solow and Srinivasan (henceforth, the RSS model) and further discussed by Okishio and Stiglitz. By viewing this vintage-capital model without discounting as a specific instance of the general theory of intertemporal resource allocation associated with Brock, Gale and McKenzie, we resolve longstanding conjectures in the form of theorems on the existence and price support of optimal paths, and of conditions sufficient for the optimality of a policy first identified by Stiglitz. We dispose of the necessity of these conditions in surprisingly simple examples of economies in which (i) an optimal path is periodic, (ii) a path following Stiglitz's policy is bad, and (iii) there is optimal investment in different vintages at different times.

Relevance: 100.00%

Abstract:

Using McKenzie's taxonomy of optimal accumulation in the long run, we report a "uniform turnpike" theorem of the third kind in a model original to Robinson, Solow and Srinivasan (RSS), and further studied by Stiglitz. Our results are presented in the undiscounted, discrete-time setting emphasized in the recent work of Khan-Mitra, and they rely on the importance of strictly concave felicity functions, or alternatively, on the value of a "marginal rate of transformation", ξσ, from one period to the next not being unity. Our results, despite their specificity, contribute to the methodology of intertemporal optimization theory, as developed in economics by Ramsey, von Neumann and their followers.

Relevance: 100.00%

Abstract:

This paper studies the electricity load demand behavior during the 2001 rationing period, which was implemented because of the Brazilian energy crisis. The hourly data refer to a utility situated in the southeast of the country. We use the model proposed by Soares and Souza (2003), making use of generalized long memory to model the seasonal behavior of the load. The rationing period is shown to have imposed a structural break in the series, decreasing the load by about 20%. Even so, forecast accuracy is decreased only marginally, and the forecasts rapidly readapt to the new situation. The forecast errors from this model also make it possible to verify the public response to the information released regarding the crisis.

Relevance: 100.00%

Abstract:

This article studies the welfare and long-run allocation impacts of privatization. There are two types of capital in this model economy, one private and the other initially public ("infrastructure"). A positive externality due to infrastructure capital is assumed, so that the government could improve upon decentralized allocations by internalizing the externality, but public investment is financed through distortionary taxation. It is shown that privatization is welfare-improving for a large set of economies and that after privatization under-investment is optimal. When operational inefficiency in the public sector or a subsidy to infrastructure accumulation is introduced, the gains from privatization are higher and positive for most reasonable combinations of parameters.

Relevance: 100.00%

Abstract:

There are four different hypotheses analyzed in the literature that explain deunionization, namely: the decrease in the demand for union representation by workers; the impact of globalization on unionization rates; technical change; and changes in the legal and political systems against unions. This paper aims to test all of them. We estimate a logistic regression using a panel data procedure with 35 industries from 1973 to 1999 and conclude that the four hypotheses cannot be rejected by the data. We also use a variance analysis decomposition to study the impact of these variables on the drop in unionization rates. In the model with no demographic variables the results show that these economic (tested) variables can account for 10% to 12% of the drop in unionization. However, when we include demographic variables these tested variables can account for 10% to 35% of the total variation in unionization rates. In this case the four hypotheses tested can explain up to 50% of the total drop in unionization rates explained by the model.

Relevance: 100.00%

Abstract:

Differences-in-Differences (DID) is one of the most widely used identification strategies in applied economics. However, how to draw inferences in DID models when there are few treated groups remains an open question. We show that the usual inference methods used in DID models might not perform well when there are few treated groups and errors are heteroskedastic. In particular, we show that when there is variation in the number of observations per group, inference methods designed to work when there are few treated groups tend to over-reject the null hypothesis when the treated groups are small relative to the control groups, and to under-reject it when they are large. This happens because larger groups tend to have lower variance, generating heteroskedasticity in the group × time aggregate DID model. We provide evidence from Monte Carlo simulations and from placebo DID regressions with the American Community Survey (ACS) and the Current Population Survey (CPS) datasets to show that this problem is relevant even in datasets with large numbers of observations per group. We then derive an alternative inference method that provides accurate hypothesis testing in situations where there are few treated groups (or even just one) and many control groups in the presence of heteroskedasticity. Our method assumes that we can model the heteroskedasticity of a linear combination of the errors. We show that this assumption can be satisfied without imposing strong assumptions on the errors in common DID applications. With many pre-treatment periods, we show that this assumption can be relaxed; instead, we provide an alternative inference method that relies on strict stationarity and ergodicity of the time series. Finally, we consider two recent alternatives to DID when there are many pre-treatment periods. We extend our inference methods to linear factor models when there are few treated groups. We also derive conditions under which a permutation test for the synthetic control estimator proposed by Abadie et al. (2010) is robust to heteroskedasticity, and we propose a modification of the test statistic that provided a better heteroskedasticity correction in our simulations.
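The placebo logic behind such a permutation test can be sketched as follows: estimate a "placebo effect" for every control unit as if it had been treated, then rank the treated unit's effect among them. This is a generic sketch of the standard Abadie et al. (2010) procedure, not the authors' modified statistic:

```python
def permutation_pvalue(effects, treated_idx):
    """Placebo (permutation) p-value: the share of units whose estimated
    effect is at least as extreme as the treated unit's.

    effects: list of estimated effects, one per unit (treated + controls),
             each computed by pretending that unit was the treated one.
    """
    treated = abs(effects[treated_idx])
    n_extreme = sum(abs(e) >= treated for e in effects)  # includes treated unit
    return n_extreme / len(effects)

# Hypothetical effects: unit 2 is the actually treated unit.
p = permutation_pvalue([0.1, 0.2, 5.0, 0.3], treated_idx=2)  # p = 0.25
```

With only a handful of control units, the smallest attainable p-value is 1/(number of units), which is one reason inference with few groups is hard in the first place.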