947 results for Random coefficient logit (RCL) model
Abstract:
A data set from a commercial Nellore beef cattle selection program was used to compare breeding models that did or did not include marker effects in the estimation of breeding values, for the situation in which only a reduced number of animals have phenotypic, genotypic and pedigree information available. The complete data set for this herd comprised 83,404 animals measured for weaning weight (WW), post-weaning gain (PWG), scrotal circumference (SC) and muscle score (MS), corresponding to 116,652 animals in the relationship matrix. Single-trait analyses were performed with the MTDFREML software to estimate fixed- and random-effects solutions from the complete data. The estimated additive effects were taken as the reference breeding values for these animals. The observed phenotype of each animal for each trait was adjusted for all fixed- and random-effects solutions except the direct additive effects. This adjusted phenotype, composed of the additive and residual parts of the observed phenotype, was used as the dependent variable in the model comparison. Among all measured animals in the herd, only 3160 were genotyped, for 106 SNP markers. Three models were compared in terms of changes in animal ranking, global fit and predictive ability. Model 1 included only polygenic effects, model 2 only marker effects, and model 3 both polygenic and marker effects. Bayesian inference via Markov chain Monte Carlo methods, performed with the TM software, was used to analyze the data for model comparison. Two different priors were adopted for the marker effects in models 2 and 3: the first was a uniform distribution (U), and the second assumed that marker effects were normally distributed (N). Higher rank correlation coefficients were observed for models 3_U and 3_N, indicating greater similarity between the rankings produced by these models and the ranking based on the reference breeding values. Model 3_N showed the best global fit, as demonstrated by its low DIC. The best models in terms of predictive ability were models 1 and 3_N. The differences due to the prior assumed for the marker effects in models 2 and 3 can be attributed to the normal prior's better handling of collinear effects. Models 2_U and 2_N performed worst, indicating that this small set of markers should not be used on its own to genetically evaluate animals without data, since its predictive ability is limited. In conclusion, model 3_N was slightly superior when only a reduced number of animals have phenotypic, genotypic and pedigree information. This can be attributed to the variation captured jointly by the marker and polygenic effects and to the normal prior assumed for the marker effects, which deals better with collinearity between markers. (C) 2012 Elsevier B.V. All rights reserved.
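As a rough, self-contained illustration of why a normal prior on marker effects copes better with collinearity than a uniform (flat) prior, the Python sketch below contrasts the two on simulated data; all numbers, names and the ridge-type shortcut are assumptions of this sketch, not the authors' MTDFREML/TM analyses.

    # Minimal sketch: a flat (uniform) prior on SNP effects gives a
    # least-squares-type solution, while a normal prior gives a
    # ridge-type posterior mean, which stays stable under collinearity.
    import numpy as np

    rng = np.random.default_rng(0)
    n_animals, n_snp = 200, 106                # 106 SNPs as in the abstract
    X = rng.choice([0.0, 1.0, 2.0], size=(n_animals, n_snp))
    X[:, 1] = X[:, 0] + rng.normal(0.0, 1e-3, n_animals)   # near-collinear pair
    beta_true = rng.normal(0.0, 0.3, n_snp)
    y = X @ beta_true + rng.normal(0.0, 1.0, n_animals)    # adjusted phenotype

    # Flat prior on marker effects -> least-squares solution (model 2_U analogue)
    beta_flat, *_ = np.linalg.lstsq(X, y, rcond=None)

    # Normal prior N(0, sigma_b^2) -> ridge posterior mean (model 2_N analogue)
    lam = 1.0 / 0.3**2                         # sigma_e^2 / sigma_b^2, assumed known
    beta_norm = np.linalg.solve(X.T @ X + lam * np.eye(n_snp), X.T @ y)

    print("flat prior, collinear pair:  ", beta_flat[:2])  # typically unstable
    print("normal prior, collinear pair:", beta_norm[:2])  # shrunk, stable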
Abstract:
Objectives. The C-Factor has been widely used to rationalize the changes in shrinkage stress occurring at tooth/resin-composite interfaces. Experimentally, such stresses have been measured in a uniaxial direction between opposed parallel walls, while the situation at adjoining cavity walls has been neglected. The aim was to investigate the hypothesis that, within stylized model rectangular cavities of constant volume and wall thickness, the interfacial shrinkage stress at the adjoining cavity walls increases steadily as the C-Factor increases. Methods. Eight 3D-FEM restored Class I 'rectangular cavity' models were created in MSC.PATRAN/MSC.Marc (r2-2005) and subjected to 1% shrinkage, keeping both the volume (20 mm(3)) and the wall thickness (2 mm) constant while varying the C-Factor (1.9-13.5). Adhesive contact between the composite and the tooth was incorporated. Polymerization shrinkage was simulated by analogy with thermal contraction. Principal stresses and strains were calculated. Peak values of maximum principal (MP) and maximum shear (MS) stress at the different walls were displayed graphically as a function of C-Factor, and the association between stress peaks and C-Factor was evaluated by Pearson correlation. Results. The hypothesis was rejected: there was no clear increase of stress peaks with C-Factor. The stress peaks, whether expressed as MP or MS, varied only slightly with increasing C-Factor. Lower stress peaks occurred at the pulpal floor than at the axial walls. In general, MP and MS were similar when the axial wall dimensions were similar. The Pearson coefficient indicated associations only for the maximum principal stress at the ZX wall and along the Z axis. Significance. Increasing the C-Factor did not increase the calculated stress peaks in the walls of model rectangular Class I cavities. (C) 2011 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
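For reference, the C-Factor of such an idealized rectangular Class I cavity is the bonded surface area (floor plus four axial walls) divided by the free occlusal area. A minimal sketch with illustrative dimensions at the constant 20 mm(3) volume (depth fixed at 2 mm purely for illustration; the paper's constant 2 mm refers to wall thickness):

    # C-Factor = bonded area / free area for an open box-shaped cavity.
    def c_factor(width_mm, length_mm, depth_mm):
        bonded = width_mm * length_mm + 2 * depth_mm * (width_mm + length_mm)
        free = width_mm * length_mm            # open occlusal surface
        return bonded / free

    volume = 20.0                              # mm^3, held constant
    depth = 2.0                                # mm, assumed fixed here
    for width in (1.0, 1.5, 2.0, 3.0):         # vary shape at fixed volume
        length = volume / (width * depth)
        print(f"{width:.1f} x {length:.2f} x {depth:.1f} mm -> "
              f"C = {c_factor(width, length, depth):.2f}")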
Abstract:
In the clinical setting, the early detection of myocardial injury induced by doxorubicin (DXR) is still considered a challenge. To assess whether ultrasonic tissue characterization (UTC) can identify early DXR-related myocardial lesions and their correlation with myocardial collagen percentages, we studied 60 rats at baseline and prospectively after weekly intravenous infusions of 2 mg/kg DXR. Echocardiographic examinations were conducted at baseline and at cumulative DXR doses of 8, 10, 12, 14 and 16 mg/kg. The left ventricular ejection fraction (LVEF), shortening fraction (SF) and the UTC indices were measured: the corrected coefficient of integrated backscatter (CC-IBS; tissue IBS intensity/phantom IBS intensity) and the magnitude of cyclic variation of this intensity curve (MCV). The variation of each study parameter with DXR dose was expressed as the mean and standard error at specific DXR doses and at baseline. The collagen percentage was calculated in six control animals and 24 DXR-treated animals. From 8 to 16 mg/kg DXR, CC-IBS increased (1.29 +/- 0.27 vs. 1.10 +/- 0.26 at baseline; p=0.005) and MCV decreased (9.1 +/- 2.8 vs. 11.02 +/- 2.6 at baseline; p=0.006). LVEF showed only a slight but significant decrease (80.4 +/- 6.9% vs. 85.3 +/- 6.9% at baseline; p=0.005) from 8 to 16 mg/kg DXR. CC-IBS was 72.2% sensitive and 83.3% specific for detecting collagen deposition of 4.24% (AUC=0.76), whereas LVEF was not accurate in detecting initial collagen deposition (AUC=0.54). In conclusion, UTC identified DXR-induced myocardial lesions earlier than LVEF, showing good accuracy in detecting initial collagen deposition in this experimental animal model.
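As a generic illustration of how diagnostic figures like those above (sensitivity, specificity, AUC) are computed from a continuous index against a binary reference, consider the following sketch on simulated data; the numbers and threshold are assumptions, not the study's measurements.

    # Score a continuous index (standing in for CC-IBS) against a
    # binary reference (collagen deposition above a cutoff).
    import numpy as np

    rng = np.random.default_rng(1)
    is_fibrotic = rng.random(30) < 0.5                 # reference standard
    index = np.where(is_fibrotic, rng.normal(1.3, 0.25, 30),
                                  rng.normal(1.1, 0.25, 30))

    def sens_spec(score, truth, threshold):
        pred = score >= threshold
        sens = (pred & truth).sum() / truth.sum()
        spec = (~pred & ~truth).sum() / (~truth).sum()
        return sens, spec

    def auc(score, truth):
        # Mann-Whitney form: P(score_pos > score_neg), ties counted 1/2
        pos, neg = score[truth], score[~truth]
        greater = (pos[:, None] > neg[None, :]).sum()
        ties = (pos[:, None] == neg[None, :]).sum()
        return (greater + 0.5 * ties) / (len(pos) * len(neg))

    print("sens/spec at threshold 1.2:", sens_spec(index, is_fibrotic, 1.2))
    print("AUC:", round(auc(index, is_fibrotic), 2))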
Abstract:
This paper sets forth a Neo-Kaleckian model of capacity utilization and growth with distribution featuring a profit-sharing arrangement. While a given proportion of firms compensate workers with only a base wage, the remaining proportion do so with a base wage and a share of profits. Consistent with the empirical evidence, workers hired by profit-sharing firms have a higher productivity than their counterparts in base-wage firms. While a higher profit-sharing coefficient raises capacity utilization and growth irrespective of the distribution of compensation strategies across firms, a higher frequency of profit-sharing firms does likewise only if the profit-sharing coefficient is sufficiently high.
Abstract:
We extend the random permutation model to obtain the best linear unbiased estimator of a finite population mean accounting for auxiliary variables under simple random sampling without replacement (SRS) or stratified SRS. The proposed method provides a systematic design-based justification for well-known results involving common estimators derived under minimal assumptions that do not require specification of a functional relationship between the response and the auxiliary variables.
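One well-known estimator of the kind the paper justifies on design grounds is the regression estimator of the population mean under SRS without replacement; a minimal sketch, assuming a single auxiliary variable with a known population mean:

    # Regression estimator: ybar_reg = ybar + b * (Xbar_pop - xbar_sample).
    import numpy as np

    rng = np.random.default_rng(2)
    N = 10_000
    x_pop = rng.gamma(4.0, 2.0, N)                     # auxiliary, known for all N
    y_pop = 3.0 + 1.5 * x_pop + rng.normal(0, 2, N)    # response, seen on sample only

    n = 200
    idx = rng.choice(N, size=n, replace=False)         # SRS without replacement
    x, y = x_pop[idx], y_pop[idx]

    b = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)
    ybar_reg = y.mean() + b * (x_pop.mean() - x.mean())

    print("sample mean       :", round(y.mean(), 3))
    print("regression estim. :", round(ybar_reg, 3))
    print("true pop. mean    :", round(y_pop.mean(), 3))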
Abstract:
In this work, we present a supersymmetric extension of the quantum spherical model, both in the component and in the superspace formalisms. We find the solution for short- and long-range interactions through the path-integral approach in the imaginary-time formalism. The existence of critical points (classical and quantum) is analyzed and the corresponding critical dimensions are determined.
Abstract:
Item response theory (IRT) comprises a set of statistical models which are useful in many fields, especially when there is an interest in studying latent variables (or latent traits). Usually such latent traits are assumed to be random variables and a convenient distribution is assigned to them. A very common choice for such a distribution has been the standard normal. Recently, Azevedo et al. [Bayesian inference for a skew-normal IRT model under the centred parameterization, Comput. Stat. Data Anal. 55 (2011), pp. 353-365] proposed a skew-normal distribution under the centred parameterization (SNCP), as studied in [R.B. Arellano-Valle and A. Azzalini, The centred parametrization for the multivariate skew-normal distribution, J. Multivariate Anal. 99(7) (2008), pp. 1362-1382], to model the latent trait distribution. This approach allows one to represent any asymmetric behaviour of the latent trait distribution. They also developed a Metropolis-Hastings within Gibbs sampling (MHWGS) algorithm based on the density of the SNCP and showed that the algorithm recovers all parameters properly. Their results indicated that, in the presence of asymmetry, the proposed model and estimation algorithm perform better than the usual model and estimation methods. Our main goal in this paper is to propose another type of MHWGS algorithm based on a stochastic representation (hierarchical structure) of the SNCP studied in [N. Henze, A probabilistic representation of the skew-normal distribution, Scand. J. Statist. 13 (1986), pp. 271-275]. Our algorithm has only one Metropolis-Hastings step, in contrast to the algorithm developed by Azevedo et al., which has two such steps. This not only makes the implementation easier but also reduces the number of proposal densities to be used, which can be a problem in the implementation of MHWGS algorithms, as can be seen in [R.J. Patz and B.W. Junker, A straightforward approach to Markov chain Monte Carlo methods for item response models, J. Educ. Behav. Stat. 24(2) (1999), pp. 146-178; R.J. Patz and B.W. Junker, The applications and extensions of MCMC in IRT: Multiple item types, missing data, and rated responses, J. Educ. Behav. Stat. 24(4) (1999), pp. 342-366; A. Gelman, G.O. Roberts, and W.R. Gilks, Efficient Metropolis jumping rules, Bayesian Stat. 5 (1996), pp. 599-607]. Moreover, we consider a modified beta prior (which generalizes the one considered in [3]) and a Jeffreys prior for the asymmetry parameter. Furthermore, we study the sensitivity of these priors as well as the use of different kernel densities for this parameter. Finally, we assess the impact of the number of examinees, the number of items and the asymmetry level on parameter recovery. Results of the simulation study indicated that our approach performs as well as that of [3] in terms of parameter recovery, mainly when the Jeffreys prior is used. They also indicated that the asymmetry level has the highest impact on parameter recovery, even though it is relatively small. A real data analysis is considered jointly with the development of model-fit assessment tools, and the results are compared with those obtained by Azevedo et al. The results indicate that the hierarchical approach makes MCMC algorithms easier to implement, facilitates convergence diagnosis and can be very useful for fitting more complex skew IRT models.
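For concreteness, Henze's stochastic representation that underlies the proposed hierarchical algorithm can be sketched as follows (shown in the direct parameterization for simplicity; the paper works with the centred one, a reparameterization of the same family):

    # Henze (1986): with Z0, Z1 iid N(0,1) and delta = lambda/sqrt(1+lambda^2),
    #   X = xi + omega * (delta * |Z0| + sqrt(1 - delta^2) * Z1),
    # so, given T = |Z0|, X is plain normal and can be Gibbs-sampled.
    import numpy as np

    def rskew_normal(xi, omega, lam, size, rng):
        delta = lam / np.sqrt(1.0 + lam**2)
        t = np.abs(rng.standard_normal(size))     # half-normal layer
        z = rng.standard_normal(size)             # conditional normal layer
        return xi + omega * (delta * t + np.sqrt(1 - delta**2) * z)

    rng = np.random.default_rng(3)
    draws = rskew_normal(xi=0.0, omega=1.0, lam=5.0, size=100_000, rng=rng)
    print("sample skewness:", round(
        ((draws - draws.mean())**3).mean() / draws.std()**3, 3))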
Abstract:
Objective: This study aimed to investigate the effect of 830 and 670 nm diode lasers on the viability of random skin flaps in rats. Background data: Low-level laser therapy (LLLT) has been reported to be successful in stimulating the formation of new blood vessels and reducing the inflammatory process after injury. However, the efficiency of such treatment remains uncertain, and there is also some controversy regarding the efficacy of the different wavelengths currently on the market. Materials and methods: Thirty Wistar rats were divided into three groups of 10. A random skin flap was raised on the dorsum of each animal. Group 1 was the control group, group 2 received 830 nm laser radiation, and group 3 received 670 nm laser radiation (power density = 0.5 mW/cm(2)). The animals underwent laser therapy at an energy density of 36 J/cm(2) (total energy = 2.52 J, 72 sec per session) immediately after surgery and on the 4 subsequent days. Laser radiation was applied at a single point 2.5 cm from the cranial base of the flap. The percentage of skin flap necrosis area was calculated on the 7th postoperative day using the paper template method, and a skin sample was collected immediately afterwards to determine vascular endothelial growth factor (VEGF) expression and the epidermal cell proliferation index (Ki-67). Results: Statistically significant differences were found in the percentage of necrosis, with higher values in group 1 than in groups 2 and 3; no statistically significant difference was found between groups 2 and 3 by this method. Group 3 presented the highest mean number of blood vessels expressing VEGF and of cells in the proliferative phase when compared with groups 1 and 2. Conclusions: LLLT was effective in increasing random skin flap viability in rats, and the 670 nm laser produced more satisfactory results than the 830 nm laser.
Abstract:
The mechanisms responsible for containing activity in systems represented by networks are crucial in various phenomena, for example in diseases such as epilepsy that affect neuronal networks, and for information dissemination in social networks. The first models to account for contained activity included triggering and inhibition processes, but they cannot be applied to social networks, where inhibition is clearly absent. A recent model showed that contained activity can be achieved with no need for inhibition processes, provided that the network is subdivided into modules (communities). In this paper, we introduce a new concept inspired by Hebbian theory, in which containment of activity is achieved by incorporating decaying node activity into a random walk mechanism that preferentially visits active nodes. Upon selecting the decay coefficient within a proper range, we observed sustained activity in all the networks tested, namely random, Barabasi-Albert and geographical networks. The generality of this finding was confirmed by showing that modularity is no longer needed if integrate-and-fire dynamics incorporates the decay factor. Taken together, these results provide a proof of principle that persistent, restrained network activation can occur in the absence of any particular topological structure. This may be the reason why neuronal activity does not spread out to the entire neuronal network, even when no special topological organization exists.
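A loose sketch of the described ingredients (my own minimal rendering, not the authors' code): a walker that prefers active neighbors, node activity reinforced on visits and decaying multiplicatively each step; the graph model and all parameter values are arbitrary illustrations.

    import numpy as np

    rng = np.random.default_rng(4)
    n = 100
    adj = rng.random((n, n)) < 0.05                # Erdos-Renyi-like graph
    adj = np.triu(adj, 1); adj = adj | adj.T       # symmetric, no self-loops

    activity = np.ones(n)
    node, rho, boost, eps = 0, 0.95, 1.0, 1e-6     # rho plays the decay coefficient
    for step in range(10_000):
        nbrs = np.flatnonzero(adj[node])
        if nbrs.size == 0:
            node = rng.integers(n)                 # restart from a random node
            continue
        w = activity[nbrs] + eps                   # preference ~ neighbor activity
        node = rng.choice(nbrs, p=w / w.sum())
        activity[node] += boost                    # Hebbian-style reinforcement
        activity *= rho                            # global decay
    print("total activity after 10k steps:", round(activity.sum(), 2))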
Abstract:
We investigate the nonequilibrium roughening transition of a one-dimensional restricted solid-on-solid model by directly sampling the stationary probability density of a suitable order parameter as the surface adsorption rate varies. The shapes of the probability density histograms suggest a typical Ginzburg-Landau scenario for the phase transition of the model, and estimates of the "magnetic" exponent seem to confirm its mean-field critical behavior. We also found that the flipping times between the metastable phases of the model scale exponentially with the system size, signaling the breaking of ergodicity in the thermodynamic limit. Incidentally, we discovered that a closely related model not considered before also displays a phase transition with the same critical behavior as the original model. Our results support the usefulness of off-critical histogram techniques in the investigation of nonequilibrium phase transitions. We also briefly discuss in the appendix a good and simple pseudo-random number generator used in our simulations.
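A minimal sketch of restricted solid-on-solid dynamics of this kind (illustrative parameters; not the authors' code or their order-parameter definition): pick a site, attempt deposition with probability p or evaporation otherwise, and accept only moves that keep neighboring height differences within +/-1.

    import numpy as np

    rng = np.random.default_rng(5)
    L, p, steps = 128, 0.6, 200_000                # p stands in for the adsorption rate
    h = np.zeros(L, dtype=int)

    for _ in range(steps):
        i = rng.integers(L)
        dh = 1 if rng.random() < p else -1
        new = h[i] + dh
        left, right = h[(i - 1) % L], h[(i + 1) % L]
        if abs(new - left) <= 1 and abs(new - right) <= 1:
            h[i] = new                             # RSOS constraint satisfied

    width = np.sqrt(((h - h.mean())**2).mean())    # interface width
    print(f"mean height {h.mean():.1f}, width {width:.2f}")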
Abstract:
A neural network model to predict ozone concentration in the Sao Paulo Metropolitan Area was developed, based on average values of meteorological variables in the morning (8:00-12:00 hr) and afternoon (13:00-17:00 hr) periods. Outputs are the maximum and average ozone concentrations in the afternoon (12:00-17:00 hr). The correlation coefficients between computed and measured values were 0.82 and 0.88 for the maximum and average ozone concentrations, respectively. The model performed well as a prediction tool for the maximum ozone concentration: for prediction horizons of 1 to 5 days, failure rates of 0 to 23% (95% confidence) were obtained.
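A minimal sketch of this type of feed-forward prediction model, using made-up data and scikit-learn's MLPRegressor as a stand-in for whatever network the authors actually used:

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(6)
    n = 500
    met = rng.normal(size=(n, 6))   # e.g. temperature, humidity, wind... (assumed)
    o3_max = 80 + 25 * met[:, 0] - 10 * met[:, 1] + rng.normal(0, 8, n)

    X_tr, X_te, y_tr, y_te = train_test_split(met, o3_max, random_state=0)
    model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                         random_state=0).fit(X_tr, y_tr)
    pred = model.predict(X_te)
    print("correlation, predicted vs measured:",
          round(np.corrcoef(pred, y_te)[0, 1], 2))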
Abstract:
The fractioning of lemon essential oil can be performed by liquid-liquid extraction using hydrous ethanol as a solvent. A quaternary mixture composed of limonene, gamma-terpinene, beta-pinene, and citral was used to simulate lemon essential oil. In this paper, we present (liquid + liquid) equilibrium data that were experimentally determined for systems containing essential oil compounds, ethanol, and water at T = 298.2 K. The experimental data were correlated using the NRTL and UNIQUAC models, and the mean deviations between calculated and experimental data were less than 0.0053 in all systems, indicating the accuracy of these molecular models in describing our systems. The results show that as the water content in the solvent phase increased, the values of the distribution coefficients decreased, regardless of the type of compound studied. However, the oxygenated compound always showed the highest distribution coefficient among the components of the essential oil, thus making deterpenation of the lemon essential oil a feasible process. (C) 2012 Elsevier Ltd. All rights reserved.
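For reference, the binary form of the NRTL activity-coefficient model used in such correlations can be sketched as follows; the tau and alpha values are illustrative placeholders, not the fitted parameters.

    import math

    def nrtl_binary(x1, tau12, tau21, alpha):
        # Binary NRTL equations for activity coefficients gamma1, gamma2.
        x2 = 1.0 - x1
        G12, G21 = math.exp(-alpha * tau12), math.exp(-alpha * tau21)
        ln_g1 = x2**2 * (tau21 * (G21 / (x1 + x2 * G21))**2
                         + tau12 * G12 / (x2 + x1 * G12)**2)
        ln_g2 = x1**2 * (tau12 * (G12 / (x2 + x1 * G12))**2
                         + tau21 * G21 / (x1 + x2 * G21)**2)
        return math.exp(ln_g1), math.exp(ln_g2)

    g1, g2 = nrtl_binary(x1=0.2, tau12=1.8, tau21=0.9, alpha=0.3)
    print("activity coefficients:", round(g1, 3), round(g2, 3))
    # Distribution coefficient of component i between the liquid phases:
    # k_i = x_i(extract) / x_i(raffinate), computed from the tie lines.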
Abstract:
This paper addresses the numerical solution of random crack propagation problems by coupling the boundary element method (BEM) with reliability algorithms. The crack propagation phenomenon is efficiently modelled with BEM, owing to its mesh-reduction features. The BEM model is based on the dual BEM formulation, in which singular and hyper-singular integral equations are adopted to construct the system of algebraic equations. Two reliability algorithms are coupled with the BEM model. The first is the well-known response surface method, in which local, adaptive polynomial approximations of the mechanical response are constructed in the search for the design point; different experiment designs and adaptive schemes are considered. The alternative approach, direct coupling, in which the limit state function remains implicit and its gradients are calculated directly from the numerical mechanical response, is also considered. The performance of the two coupling methods is compared in application to several crack propagation problems. The investigation shows that the direct coupling scheme converged for all problems studied, irrespective of the degree of nonlinearity, and that its computational cost was a fraction of the cost of the response surface solutions, regardless of the experiment design or adaptive scheme considered. (C) 2012 Elsevier Ltd. All rights reserved.
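A minimal sketch of the direct coupling idea, using a FORM (HLRF) iteration with finite-difference gradients on an implicit limit state function; the closed-form g below merely stands in for the BEM crack model, and all values are illustrative.

    import numpy as np

    def g(u):
        # placeholder for the implicit mechanical response (BEM in the paper)
        return 3.0 - u[0] - 0.5 * u[1]**2

    def grad(f, u, h=1e-6):
        # central finite differences on the numerical response
        e = np.eye(len(u))
        return np.array([(f(u + h * e[i]) - f(u - h * e[i])) / (2 * h)
                         for i in range(len(u))])

    u = np.zeros(2)                                # start at the mean point
    for _ in range(50):                            # HLRF recurrence
        gu, dg = g(u), grad(g, u)
        u_new = dg * (dg @ u - gu) / (dg @ dg)
        if np.linalg.norm(u_new - u) < 1e-8:
            break
        u = u_new
    print("reliability index beta:", round(np.linalg.norm(u), 4))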
Abstract:
The ground-state phase diagram of an Ising spin-glass model on a random graph with an arbitrary fraction w of ferromagnetic interactions is analysed in the presence of an external field. Using the replica method, and performing an analysis of stability of the replica-symmetric solution, it is shown that w = 1/2, corresponding to an unbiased spin glass, is a singular point in the phase diagram, separating a region with a spin-glass phase (w < 1/2) from a region with spin-glass, ferromagnetic, mixed and paramagnetic phases (w > 1/2).
Abstract:
Polynomial Chaos Expansion (PCE) is widely recognized as a flexible tool to represent different types of random variables and processes. However, applications to real, experimental data are still limited. In this article, PCE is used to represent the random time-evolution of metal corrosion growth in marine environments. The PCE coefficients are determined so as to represent data from 45 corrosion coupons tested by Jeffrey and Melchers (2001) at Taylors Beach, Australia. The accuracy of the representation and the possibilities for model extrapolation are considered in the study. The results show that reasonably accurate smooth representations of the corrosion process can be obtained; the accuracy of the representation is limited mainly by the use of a smooth model to represent non-smooth corrosion data. Random corrosion leads to time-variant reliability problems, owing to resistance degradation over time, and such problems are not trivial to solve, especially under random process loading. Two example problems are solved herein, showing how the developed PCE representations can be employed in the reliability analysis of structures subject to marine corrosion. Monte Carlo simulation is used to solve the resulting time-variant reliability problems, and a more computationally efficient solution of comparable accuracy is also presented.
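A minimal sketch of one way to fit a Hermite PCE to scattered corrosion data and reuse it in Monte Carlo simulation; this quantile-matching construction and all numbers are assumptions of the sketch, not the authors' procedure.

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(7)
    loss = np.sort(rng.lognormal(mean=0.1, sigma=0.4, size=45))  # fake coupon data

    # Map empirical plotting positions to the standard normal germ xi
    pp = (np.arange(1, loss.size + 1) - 0.5) / loss.size
    xi = norm.ppf(pp)

    degree = 3
    Psi = np.polynomial.hermite_e.hermevander(xi, degree)  # He_0..He_3 basis
    coeffs, *_ = np.linalg.lstsq(Psi, loss, rcond=None)
    print("PCE coefficients:", np.round(coeffs, 3))

    # The fitted expansion can then feed a Monte Carlo reliability run
    # by sampling xi ~ N(0,1) and evaluating the expansion cheaply.
    xi_mc = rng.standard_normal(100_000)
    loss_mc = np.polynomial.hermite_e.hermevander(xi_mc, degree) @ coeffs
    print("P(loss > 2.0) ~", (loss_mc > 2.0).mean())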