8 results for Superiority

in Biblioteca Digital da Produção Intelectual da Universidade de São Paulo


Relevance:

10.00%

Abstract:

A data set from a commercial Nellore beef cattle selection program was used to compare breeding models that did or did not include marker effects in estimating breeding values, for the situation in which only a reduced number of animals have phenotypic, genotypic and pedigree information available. The complete data set for this herd comprised 83,404 animals measured for weaning weight (WW), post-weaning gain (PWG), scrotal circumference (SC) and muscle score (MS), corresponding to 116,652 animals in the relationship matrix. Single-trait analyses were performed with the MTDFREML software to estimate fixed- and random-effect solutions using this complete data set. The estimated additive effects were taken as the reference breeding values for those animals. Each animal's observed phenotype for each trait was adjusted for the fixed- and random-effect solutions, except for the direct additive effects. The adjusted phenotype, composed of the additive and residual parts of the observed phenotype, was used as the dependent variable for model comparison. Among all measured animals in this herd, only 3160 were genotyped, for 106 SNP markers. Three models were compared in terms of changes in animal ranking, global fit and predictive ability: model 1 included only polygenic effects, model 2 only marker effects, and model 3 both polygenic and marker effects. Bayesian inference via Markov chain Monte Carlo methods, performed with the TM software, was used to analyze the data for model comparison. Two different priors were adopted for the marker effects in models 2 and 3: the first was a uniform distribution (U), and the second assumed normally distributed marker effects (N). Higher rank correlation coefficients were observed for models 3_U and 3_N, indicating greater similarity between these models' animal rankings and the ranking based on the reference breeding values. Model 3_N presented the best global fit, as demonstrated by its low DIC.
The best models in terms of predictive ability were models 1 and 3_N. Differences due to the prior assumed for the marker effects in models 2 and 3 can be attributed to the normal prior's better handling of collinear effects. Models 2_U and 2_N presented the worst performance, indicating that this small set of markers should not be used to genetically evaluate animals with no data, since its predictive ability is limited. In conclusion, model 3_N showed a slight superiority when a reduced number of animals have phenotypic, genotypic and pedigree information, which can be attributed to jointly modeling marker and polygenic effects and to the normal prior on the marker effects, which deals better with collinearity between markers. (C) 2012 Elsevier B.V. All rights reserved.
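The contrast between a uniform and a normal prior on marker effects has a familiar frequentist analogue: least squares (flat prior) versus ridge regression (normal prior, which shrinks collinear effects toward zero). As a minimal sketch of why the normal prior copes better with collinearity, the toy example below simulates genotypes for the abstract's 106 markers and compares both estimators by rank correlation with the true breeding values; everything except the marker count (sizes, variances, shrinkage parameter) is invented, and this is not the MTDFREML/TM analysis itself.

```python
import numpy as np

rng = np.random.default_rng(0)
n_animals, n_markers = 300, 106            # 106 SNP markers, as in the study
X = rng.binomial(2, 0.3, (n_animals, n_markers)).astype(float)
true_effects = rng.normal(0, 0.1, n_markers)
tbv = X @ true_effects                      # "reference" breeding values
y = tbv + rng.normal(0, 1.0, n_animals)     # adjusted phenotype = additive + residual

Xc = X - X.mean(axis=0)                     # center genotypes

# Flat-prior analogue: least squares (unstable when markers are collinear)
b_flat, *_ = np.linalg.lstsq(Xc, y - y.mean(), rcond=None)

# Normal-prior analogue: ridge regression, which shrinks collinear effects
lam = 10.0                                  # illustrative shrinkage parameter
b_ridge = np.linalg.solve(Xc.T @ Xc + lam * np.eye(n_markers),
                          Xc.T @ (y - y.mean()))

def rank_corr(a, b):
    """Spearman rank correlation via Pearson correlation of ranks."""
    ra, rb = np.argsort(np.argsort(a)), np.argsort(np.argsort(b))
    return np.corrcoef(ra, rb)[0, 1]

print("flat prior  :", rank_corr(Xc @ b_flat, tbv))
print("normal prior:", rank_corr(Xc @ b_ridge, tbv))
```

Rank correlation against the reference breeding values is the same comparison criterion the abstract uses for animal rankings.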

Relevance:

10.00%

Abstract:

The saccadic paradigm has been used to investigate the specific cortical networks involved in visuospatial attention. We examined whether asymmetry in the theta and beta bands differentiates the roles of the hemispheres during the execution of two prosaccadic conditions: a fixed condition, in which the stimulus was always presented at the same location, and a random condition, in which the stimulus location was unpredictable. Twelve healthy volunteers (3 male; mean age: 26.25 years) performed the task while their brain activity was recorded with quantitative electroencephalography. We did not find any significant differences in the beta, slow-alpha or fast-alpha frequencies for the electrode pairs analyzed. For the theta band, the results showed a superiority of the left hemisphere in the frontal region when responding to the random condition on the right, which is related to the planning and selection of responses, and also a greater activation of the right hemisphere in the occipital region during the random condition, related to the identification and recognition of patterns. These results indicate that asymmetries in the premotor area and the occipital cortex differentiate memory-driven from stimulus-driven tasks. (C) 2011 Elsevier Inc. All rights reserved.
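Hemispheric comparisons of this kind typically reduce to a band-power asymmetry index computed per electrode pair. The sketch below shows one common way to do this on synthetic data: estimate theta-band (4-8 Hz) power from a periodogram for a "left" and a "right" channel and form a normalized left-minus-right index. The sampling rate, signal amplitudes and the index formula are illustrative assumptions, not taken from the study.

```python
import numpy as np

fs = 250.0                       # sampling rate in Hz (illustrative)
t = np.arange(0, 4.0, 1 / fs)    # one 4-second epoch
rng = np.random.default_rng(1)

# Synthetic "left" and "right" frontal channels: the left one carries a
# stronger 6 Hz (theta) component, mimicking left-hemisphere dominance.
left = 2.0 * np.sin(2 * np.pi * 6 * t) + rng.normal(0, 1, t.size)
right = 0.5 * np.sin(2 * np.pi * 6 * t) + rng.normal(0, 1, t.size)

def band_power(x, fs, lo, hi):
    """Mean power spectral density in [lo, hi) Hz via a plain periodogram."""
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / (fs * x.size)
    band = (freqs >= lo) & (freqs < hi)
    return psd[band].mean()

theta_l = band_power(left, fs, 4, 8)
theta_r = band_power(right, fs, 4, 8)

# A common asymmetry index: positive values mean greater left-side power.
asym = (theta_l - theta_r) / (theta_l + theta_r)
print(f"theta asymmetry index: {asym:.2f}")
```

In practice the index would be averaged over epochs and compared between the fixed and random conditions.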

Relevance:

10.00%

Abstract:

Objective: Several implant surfaces are being developed, some at the nanoscale level. In this study, two different surfaces had their early healing properties compared in the context of circumferential defects of various widths. Material and methods: Six dogs had their mandibular premolars extracted. After 8 weeks, four implants were placed equicrestally on each side. One acted as control, while the others were inserted into sites with circumferential defects 1.0, 1.5 and 2.0 mm wide and 5 mm deep. A nano-modified surface was used on one side and a micro-rough surface on the other. Bone markers were administered on the third day after implant placement and then after 1, 2 and 4 weeks to investigate bone formation dynamics through fluorescence analysis. Ground sections were prepared from 8-week healing biopsies, and histomorphometry was performed. Results: The fluorescence evaluation of early healing showed numerically better results for the nano-modified group; however, this trend was not followed by the histomorphometric evaluation. A non-significant numerical superiority of the micro-rough group was observed in terms of vertical bone apposition, defect bone fill, bone-to-implant contact and bone density. In the intra-group analysis, the wider defects showed the worst results while the control sites showed the best results for the different parameters, but without statistical significance. Conclusion: Both surfaces may lead to complete fill of circumferential defects, but the gap width must be considered a challenge. The nano-scale modification was beneficial in the early stages of bone healing, but the micro-rough surface showed numerically better outcomes at the final 8-week time point.

Relevance:

10.00%

Abstract:

Background: The optimal revascularization strategy for diabetic patients with multivessel coronary artery disease (MVD) remains uncertain owing to the lack of an adequately powered randomized trial. The FREEDOM trial was designed to compare contemporary coronary artery bypass grafting (CABG) with percutaneous coronary intervention (PCI) using drug-eluting stents in diabetic patients with MVD, against a background of optimal medical therapy. Methods: A total of 1,900 diabetic participants with MVD were randomized to PCI or CABG worldwide from April 2005 to March 2010. FREEDOM is a superiority trial with a mean follow-up of 4.37 years (minimum 2 years) and 80% power to detect a 27.0% relative reduction. We present the baseline characteristics of patients screened and randomized, and provide a comparison with other MVD trials involving diabetic patients. Results: The randomized cohort was 63.1 +/- 9.1 years old and 29% female, with a mean diabetes duration of 10.2 +/- 8.9 years. Most (83%) had 3-vessel disease and on average took 5.5 +/- 1.7 vascular medications, with 32% on insulin therapy. Nearly all had hypertension and/or dyslipidemia, and 26% had a prior myocardial infarction. Mean hemoglobin A1c was 7.8 +/- 1.7%, 29% had low-density lipoprotein <70 mg/dL, and mean systolic blood pressure was 134 +/- 20 mm Hg. The mean SYNTAX score was 26.2, with a symmetric distribution. FREEDOM trial participants have baseline characteristics similar to those of contemporary multivessel and diabetes trial cohorts. Conclusions: The FREEDOM trial has successfully recruited a high-risk diabetic MVD cohort. Follow-up efforts include aggressive monitoring to optimize background risk factor control. FREEDOM will contribute significantly to the PCI versus CABG debate in diabetic patients with MVD. (Am Heart J 2012;164:591-9.)
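The "80% power to detect a 27.0% relative reduction" statement corresponds to a standard two-proportion sample-size calculation for a superiority trial. The sketch below shows that calculation under the usual normal approximation; the control-arm event rate used here is an invented placeholder, not a figure from FREEDOM, so the resulting number is purely illustrative.

```python
import math

# Two-sided alpha = 0.05 and 80% power: standard normal quantiles.
z_alpha, z_beta = 1.959964, 0.841621

def n_per_arm(p_control, rel_reduction):
    """Approximate sample size per arm for a two-proportion superiority test
    (normal approximation, equal allocation)."""
    p_treat = p_control * (1 - rel_reduction)
    p_bar = (p_control + p_treat) / 2
    num = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
           + z_beta * math.sqrt(p_control * (1 - p_control)
                                + p_treat * (1 - p_treat))) ** 2
    return math.ceil(num / (p_control - p_treat) ** 2)

# Hypothetical 20% control-arm event rate, with the trial's 27% relative
# reduction as the target effect size.
print(n_per_arm(0.20, 0.27))
```

Event-driven trials like FREEDOM would additionally account for follow-up duration and loss to follow-up, which this per-arm formula ignores.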

Relevance:

10.00%

Abstract:

The autoregressive (AR) estimator, a non-parametric method, is used to analyze functional magnetic resonance imaging (fMRI) data. The same method has been used successfully in several other time-series analyses. It uses only the available experimental data points to estimate the most plausible power spectrum compatible with the experimental data, with no need for assumptions about non-measured points. The time series obtained from fMRI block-paradigm data is analyzed by the AR method to determine the brain regions active in the processing of a given stimulus. This method is considerably more reliable than the fast Fourier transform or parametric methods. The time series corresponding to each image pixel is analyzed with the AR estimator and the corresponding poles are obtained. The pole distribution gives the shape of the power spectrum, and pixels with poles at the stimulation frequency are considered active regions. The method was applied to simulated and real data; its superiority is shown by receiver operating characteristic curves obtained using the simulated data.
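The pole-based detection idea can be sketched in a few lines: fit an AR model to a pixel's time series (here via the Yule-Walker equations, one common choice; the paper's exact estimator may differ), take the roots of the AR polynomial as poles, and check whether a pole falls at the stimulation frequency. The TR, stimulation frequency, model order and noise level below are all invented for illustration.

```python
import numpy as np

TR = 2.0                          # repetition time in seconds (illustrative)
fs = 1.0 / TR
f_stim = 1.0 / 32.0               # block-paradigm stimulation frequency, Hz
n = 256
t = np.arange(n) * TR
rng = np.random.default_rng(2)
# Synthetic "active pixel": response at the stimulation frequency plus noise
series = np.sin(2 * np.pi * f_stim * t) + 0.3 * rng.normal(size=n)

def ar_poles(x, order):
    """Fit an AR(order) model by Yule-Walker and return its poles."""
    x = x - x.mean()
    m = x.size
    # Biased sample autocovariances r[0..order]
    r = np.array([np.dot(x[:m - k], x[k:]) / m for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:order + 1])         # AR coefficients
    return np.roots(np.concatenate(([1.0], -a)))   # poles of the AR filter

poles = ar_poles(series, order=6)
freqs = np.abs(np.angle(poles)) / (2 * np.pi) * fs
# An "active" pixel shows a pole near the stimulation frequency.
print("pole frequencies (Hz):", np.round(np.sort(freqs), 4))
```

Sweeping a detection threshold on the distance between the nearest pole and the stimulation frequency is what would generate the ROC curves mentioned in the abstract.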

Relevance:

10.00%

Abstract:

This paper addresses the m-machine no-wait flow shop problem in which the set-up time of a job is separated from its processing time. The performance measure considered is total flowtime. A new hybrid metaheuristic, Genetic Algorithm-Cluster Search, is proposed to solve the scheduling problem. The performance of the proposed method is evaluated and the results are compared with those of the best method reported in the literature. Experimental tests show the superiority of the new method on the test problem set with regard to solution quality. (c) 2012 Elsevier Ltd. All rights reserved.
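The objective any such metaheuristic evaluates is the total flowtime of a job permutation under the no-wait constraint. The sketch below computes it for one common formulation with anticipatory (separated) setups, where a machine's setup for the next job may run while the machine is idle; the paper's exact setup semantics may differ, and the tiny instance is made up.

```python
import numpy as np

def total_flowtime(perm, p, s):
    """Total flowtime of a permutation in an m-machine no-wait flow shop
    with anticipatory (separated) setup times.

    p[j, k]: processing time of job j on machine k
    s[j, k]: setup time of machine k before job j (may run while idle)
    """
    # offsets[j, k] = time from job j's start on machine 1 until it
    # reaches machine k (no-wait: the job never queues between machines)
    offsets = np.hstack([np.zeros((p.shape[0], 1)),
                         np.cumsum(p, axis=1)[:, :-1]])
    flowtime, start_prev, prev = 0.0, None, None
    for j in perm:
        if prev is None:
            # First job: each machine must be set up before the job arrives
            start = max(0.0, float(np.max(s[j] - offsets[j])))
        else:
            # Job j may start only when, on every machine, the previous job
            # has left and j's setup is finished before j arrives there
            finish_prev = start_prev + offsets[prev] + p[prev]
            start = float(np.max(finish_prev + s[j] - offsets[j]))
        flowtime += start + offsets[j][-1] + p[j][-1]  # completion on last machine
        start_prev, prev = start, j
    return flowtime

# Tiny illustrative instance: 3 jobs, 2 machines (all numbers made up).
p = np.array([[2.0, 3.0], [4.0, 1.0], [3.0, 2.0]])
s = np.array([[1.0, 1.0], [0.5, 0.5], [1.0, 2.0]])
print(total_flowtime([0, 1, 2], p, s))
```

A genetic algorithm would call this evaluation for every candidate permutation, so keeping it vectorized per job matters for run time on larger instances.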

Relevance:

10.00%

Abstract:

The asymptotic expansion of the distribution of the gradient test statistic is derived for a composite hypothesis under a sequence of Pitman alternative hypotheses converging to the null hypothesis at rate n^(-1/2), n being the sample size. Comparisons of the local powers of the gradient, likelihood ratio, Wald and score tests reveal no uniform superiority property. The power performance of all four criteria in the one-parameter exponential family is examined.
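For a scalar parameter, the four statistics being compared are easy to write down: the gradient statistic is simply the score at the null times the distance of the MLE from the null, with no information matrix needed. The hypothetical example below computes all four for an exponential(rate) model, a member of the one-parameter exponential family; the data, null value and sample size are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
theta0 = 1.0                        # null value of the rate parameter
x = rng.exponential(1 / 1.3, 200)   # data generated under theta = 1.3
n, xbar = x.size, x.mean()
theta_hat = 1.0 / xbar              # MLE of the exponential rate

def loglik(theta):                  # up to an additive constant
    return n * np.log(theta) - theta * x.sum()

def score(theta):                   # U(theta) = n/theta - sum(x)
    return n / theta - x.sum()

# Fisher information for the exponential rate is I(theta) = n / theta^2
lr       = 2 * (loglik(theta_hat) - loglik(theta0))        # likelihood ratio
wald     = (theta_hat - theta0) ** 2 * n / theta_hat ** 2  # uses I(theta_hat)
rao      = score(theta0) ** 2 * theta0 ** 2 / n            # U^2 / I(theta0)
gradient = score(theta0) * (theta_hat - theta0)            # gradient statistic

print({k: round(v, 3) for k, v in
       zip(["LR", "Wald", "score", "gradient"], [lr, wald, rao, gradient])})
```

All four are asymptotically chi-square with 1 degree of freedom under the null, which is why the comparison in the paper has to go to higher-order (local power) terms to separate them.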

Relevance:

10.00%

Abstract:

Background: With the development of DNA hybridization microarray technologies, it is now possible to assess the expression levels of thousands to tens of thousands of genes simultaneously. Quantitative comparison of microarrays uncovers distinct patterns of gene expression, which define different cellular phenotypes or cellular responses to drugs. Owing to technical biases, normalization of the intensity levels is a prerequisite for further statistical analyses, so choosing a suitable normalization approach can be critical and deserves judicious consideration. Results: We considered three commonly used normalization approaches, namely Loess, Splines and Wavelets, and two non-parametric regression methods that had not yet been used for normalization, namely Kernel smoothing and Support Vector Regression. The results were compared using artificial microarray data and benchmark studies. They indicate that Support Vector Regression is the most robust to outliers and that Kernel smoothing is the worst normalization technique, while no practical differences were observed among Loess, Splines and Wavelets. Conclusion: In light of these results, Support Vector Regression is favored for microarray normalization because of its robustness in estimating the normalization curve, in which it is superior to the other methods.
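All of the compared methods share the same skeleton: regress the log-ratio M on the average log-intensity A and subtract the fitted trend. The sketch below implements that skeleton with the Kernel smoothing variant (Nadaraya-Watson regression), since it needs only numpy; Support Vector Regression would follow the identical subtract-the-trend pattern with an SVR fit in place of the kernel estimate. The synthetic bias curve and the bandwidth are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
n_genes = 2000
A = rng.uniform(4, 14, n_genes)                # average log-intensity
# Synthetic intensity-dependent dye bias: a curved trend plus noise
M = 0.4 * np.sin(A / 2.0) + rng.normal(0, 0.3, n_genes)

def kernel_normalize(A, M, bandwidth=0.5):
    """Nadaraya-Watson kernel regression of M on A; returns M minus the
    fitted trend, i.e. the normalized log-ratios."""
    diffs = (A[:, None] - A[None, :]) / bandwidth
    w = np.exp(-0.5 * diffs ** 2)              # Gaussian kernel weights
    trend = (w * M[None, :]).sum(axis=1) / w.sum(axis=1)
    return M - trend

M_norm = kernel_normalize(A, M)
print("mean |M| before:", np.abs(M).mean().round(3))
print("mean |M| after :", np.abs(M_norm).mean().round(3))
```

The abstract's finding that Kernel smoothing performed worst is consistent with this estimator's known sensitivity to bandwidth choice and to outliers, which pull the local weighted mean, whereas SVR's epsilon-insensitive loss caps an outlier's influence.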