939 results for General linear models


Relevance: 90.00%

Abstract:

Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, yet uncertainty quantification remains essential in the sciences, where the number of parameters to estimate often exceeds the sample size despite the huge increases in n seen in many fields. The tendency in some areas of industry to dispense with traditional statistical analysis on the grounds that "n=all" is therefore of little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification, but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and it is the primary motivation for the work presented here.

Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is the design and characterization of computational algorithms that scale better in n or p. In the first case, the focus is on joint inference outside of the standard problem of multivariate continuous data that has been a major focus of previous theoretical work in this area. In the second, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms and for characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.

One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
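
For readers unfamiliar with the factorizations being compared, the following is a generic sketch (standard notation, not taken from Chapter 2) of the PARAFAC-type latent class representation: the joint probability mass function is a finite mixture of products of independent multinomial kernels, which corresponds to a nonnegative rank decomposition of the probability tensor.

```latex
% Latent class (PARAFAC-type) factorization of a p-variate categorical pmf,
% assuming k latent classes with weights \nu_h (generic notation):
P(y_1 = c_1, \dots, y_p = c_p)
  = \sum_{h=1}^{k} \nu_h \prod_{j=1}^{p} \lambda^{(j)}_{h c_j},
\qquad \nu_h \ge 0,\ \sum_{h=1}^{k} \nu_h = 1,\ \sum_{c} \lambda^{(j)}_{h c} = 1 .
```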

Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations, and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and other common population structure inference problems is assessed in simulations and a real data application.

In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis--Ylvisaker priors for the parameters of log-linear models do not give rise to closed form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis--Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.
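
A standard fact underlying Gaussian approximations of this kind, stated in generic notation rather than quoted from Chapter 4: among all Gaussian distributions, the Kullback-Leibler divergence KL(p ∥ q) from a target posterior p is minimized by matching the posterior mean and covariance (the direction of the divergence used in the thesis may differ).

```latex
% Moment matching: among Gaussians, KL(p || q) is minimized by matching
% the mean and covariance of the target p (generic notation):
\operatorname*{arg\,min}_{q=\mathcal{N}(m,\Sigma)} \mathrm{KL}\big(p \,\|\, q\big)
  = \mathcal{N}\big(\mathbb{E}_p[\theta],\, \mathrm{Cov}_p(\theta)\big).
```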

Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.

The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo (MCMC), the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel, but comparatively little attention has been paid to convergence and estimation error in the resulting approximating Markov chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.
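
As an illustration of the first family of approximations mentioned (random subsets of data), here is a minimal, hypothetical sketch of a Metropolis-Hastings step in which the full-data log-likelihood is replaced by a scaled subsample estimate. The function names, random-walk proposal and tuning constants are placeholders, not the kernels analyzed in Chapter 6.

```python
import numpy as np

def subsampled_mh_step(theta, data, log_prior, log_lik_one, m, step_sd, rng):
    """One random-walk MH step using a random subsample of size m.

    The full-data log-likelihood sum_i log f(x_i | theta) is approximated by
    (n / m) times the sum over a random subsample, yielding an approximate
    (perturbed) transition kernel rather than an exact one.
    """
    n = len(data)
    idx = rng.choice(n, size=m, replace=False)  # random subset of the data

    def approx_log_post(t):
        return log_prior(t) + (n / m) * sum(log_lik_one(x, t) for x in data[idx])

    proposal = theta + step_sd * rng.standard_normal(theta.shape)
    log_alpha = approx_log_post(proposal) - approx_log_post(theta)
    return proposal if np.log(rng.uniform()) < log_alpha else theta
```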

Data augmentation Gibbs samplers are arguably the most popular class of algorithms for approximately sampling from the posterior distribution of the parameters of generalized linear models. The truncated normal and Pólya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size, up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset.
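
For concreteness, a compact sketch of the truncated-normal (Albert-Chib) data augmentation Gibbs sampler for probit regression referred to above; the Gaussian prior variance and other settings are illustrative assumptions, not those used in Chapter 7.

```python
import numpy as np
from scipy.stats import truncnorm

def probit_da_gibbs(X, y, n_iter=2000, prior_var=100.0, seed=0):
    """Albert-Chib data augmentation Gibbs sampler for probit regression.

    Latent z_i ~ N(x_i' beta, 1), truncated to (0, inf) if y_i = 1 and to
    (-inf, 0) if y_i = 0; beta | z then has a conjugate Gaussian update.
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    V = np.linalg.inv(X.T @ X + np.eye(p) / prior_var)  # posterior cov of beta | z
    beta = np.zeros(p)
    draws = np.empty((n_iter, p))
    for t in range(n_iter):
        mu = X @ beta
        # standardized truncation bounds for the latent utilities
        lo = np.where(y == 1, -mu, -np.inf)
        hi = np.where(y == 1, np.inf, -mu)
        z = mu + truncnorm.rvs(lo, hi, size=n, random_state=rng)
        # conjugate Gaussian update for beta given z
        m = V @ (X.T @ z)
        beta = rng.multivariate_normal(m, V)
        draws[t] = beta
    return draws
```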

Relevance: 90.00%

Abstract:

BACKGROUND: Regional differences in physician supply can be found in many health care systems, regardless of their organizational and financial structure. A theoretical model is developed for physicians' decisions on office location, covering demand-side factors and a consumption time function. METHODS: To test the propositions following from the theoretical model, generalized linear models were estimated to explain differences across 412 German districts. Various factors found in the literature were included to control for physicians' regional preferences. RESULTS: Evidence in favor of the first three propositions of the theoretical model could be found. Specialists show a stronger association with more highly populated districts than GPs. Although indicators of regional preferences are significantly correlated with physician density, their coefficients are not as high as that of population density. CONCLUSIONS: If regional disparities are to be addressed by political action, the focus should be on counteracting those parameters representing physicians' preferences in over- and undersupplied regions.
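
A minimal sketch of the kind of district-level generalized linear model the abstract describes, written with statsmodels; the file name, covariates, negative binomial family and population exposure are illustrative assumptions, not the specification actually estimated.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical district-level data: one row per district (412 in the study).
districts = pd.read_csv("districts.csv")  # placeholder file name

# Counts of physician offices modelled with a negative binomial GLM,
# population as exposure, plus demand-side and preference covariates.
model = smf.glm(
    "physician_offices ~ population_density + income_per_capita + schools + leisure_index",
    data=districts,
    family=sm.families.NegativeBinomial(),
    exposure=districts["population"],
)
print(model.fit().summary())
```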

Relevance: 90.00%

Abstract:

The application of 3D grain-based modelling techniques is investigated in both small and large scale 3DEC models, in order to simulate brittle fracture processes in low-porosity crystalline rock. Mesh dependency in 3D grain-based models (GBMs) is examined through a number of cases to compare Voronoi and tetrahedral grain assemblages. Various methods are used in the generation of tessellations, each with a number of issues and advantages. A number of comparative UCS test simulations capture the distinct failure mechanisms, strength profiles, and progressive damage development of the various Voronoi and tetrahedral GBMs. Relative calibration requirements are outlined to generate similar macro-strength and damage profiles for all the models. The results confirmed a number of inherent model behaviors that arise due to mesh dependency. In Voronoi models, inherent tensile failure mechanisms are produced by internal wedging and rotation of Voronoi grains, resulting in a combined dependence on frictional and cohesive strength. In tetrahedral models, increased kinematic freedom of grains and an abundance of straight, connected failure pathways cause a preference for shear failure; this prevents the development of significant normal stresses and makes strength depend primarily on contact cohesion. In general, Voronoi models require high relative contact tensile strength values, with lower contact stiffness and contact cohesive strength compared to tetrahedral tessellations. Upscaling of 3D GBMs is investigated for both Voronoi and tetrahedral tessellations using a case study from AECL’s Mine-by Experiment at the Underground Research Laboratory. An upscaled tetrahedral model was able to reasonably simulate damage development in the roof, forming a notch geometry, by adjusting the cohesive strength. An upscaled Voronoi model underestimated the damage development in the roof and floor, and overestimated the damage in the side-walls. This was attributed to limitations of the discretization resolution.

Relevance: 90.00%

Abstract:

This paper estimates Bejarano and Charry's (2014) small open economy model with financial frictions for the Colombian economy using Bayesian estimation techniques. Additionally, I compute the welfare gains of incorporating an optimal response to credit spreads into an augmented Taylor rule. The main result is that a reaction to credit spreads does not imply significant welfare gains unless economic disturbances increase in volatility, as in the disruption implied by a financial crisis. Otherwise, its impact on the macroeconomic variables is negligible.
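
As a hedged illustration of what an augmented Taylor rule of this type usually looks like (generic notation; the exact specification and coefficients are not taken from the paper):

```latex
% Generic inertial Taylor rule augmented with a response to the credit spread s_t:
i_t = \rho\, i_{t-1} + (1-\rho)\big(\phi_\pi \pi_t + \phi_y y_t + \phi_s s_t\big) + \varepsilon^{m}_t .
```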

Relevance: 90.00%

Abstract:

Species occurrence and abundance models are important tools that can be used in biodiversity conservation, and can be applied to predict or plan actions needed to mitigate the environmental impacts of hydropower dams. In this study our objectives were: (i) to model the occurrence and abundance of threatened plant species, (ii) to verify the relationship between predicted occurrence and true abundance, and (iii) to assess whether models based on abundance are more effective in predicting species occurrence than those based on presence–absence data. Individual representatives of nine species were counted within 388 randomly georeferenced plots (10 m × 50 m) around the Barra Grande hydropower dam reservoir in southern Brazil. We modelled their relationship with 15 environmental variables using both occurrence (Generalised Linear Models) and abundance data (Hurdle and Zero-Inflated models). Overall, occurrence models were more accurate than abundance models. For all species, observed abundance was significantly, although not strongly, correlated with the probability of occurrence. This correlation lost significance when zero-abundance (absence) sites were excluded from analysis, but only when this entailed a substantial drop in sample size. The same occurred when analysing relationships between abundance and probability of occurrence from previously published studies on a range of different species, suggesting that future studies could potentially use probability of occurrence as an approximate indicator of abundance when the latter is not possible to obtain. This possibility might, however, depend on life history traits of the species in question, with some traits favouring a relationship between occurrence and abundance. Reconstructing species abundance patterns from occurrence could be an important tool for conservation planning and the management of threatened species, allowing scientists to indicate the best areas for collection and reintroduction of plant germplasm or choose conservation areas most likely to maintain viable populations.
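
A rough sketch of the two modelling routes described above, using statsmodels: a binomial GLM for presence-absence and a zero-inflated Poisson for abundance. The file and column names are hypothetical, and the zero-inflated Poisson stands in for the broader Hurdle/Zero-Inflated family used in the study.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

plots = pd.read_csv("plots.csv")  # hypothetical plot-level data
X = sm.add_constant(plots[["elevation", "canopy_cover", "soil_ph"]])  # illustrative predictors

# Occurrence model: presence-absence via a binomial GLM (logistic regression).
occ = sm.GLM(plots["present"], X, family=sm.families.Binomial()).fit()

# Abundance model: counts with excess zeros via a zero-inflated Poisson.
zip_model = ZeroInflatedPoisson(plots["count"], X, exog_infl=X, inflation="logit")
zip_fit = zip_model.fit(method="bfgs", maxiter=500, disp=False)

# Compare predicted probability of occurrence with observed abundance.
print(np.corrcoef(occ.predict(X), plots["count"]))
```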

Relevance: 90.00%

Abstract:

Species distribution and ecological niche models are increasingly used in biodiversity management and conservation. However, their predictive performance is rarely followed up over time, to check whether their predictions are fulfilled and remain accurate, or whether they apply only to the data set from which they were produced. In 2003, a distribution model of the Eurasian otter (Lutra lutra) in Spain was published, based on the results of a country-wide otter survey published in 1998. This model was built with logistic regression of otter presence-absence in 10 × 10 km UTM cells on a diverse set of environmental, human and spatial variables, selected according to statistical criteria. Here we evaluate this model against the results of the most recent otter survey, carried out a decade later and after a significant expansion of the otter distribution area in this country. Despite the time elapsed and the evident changes in this species’ distribution, the model maintained a good predictive capacity, considering both discrimination and calibration measures. Otter distribution did not expand randomly or simply towards neighbouring areas, but specifically towards the areas predicted as most favourable by the model based on data from 10 years before. This corroborates the utility of predictive distribution models, at least in the medium term and when they are made with robust methods and relevant predictor variables.
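
A minimal sketch of how such a presence-absence model can be fitted and later re-evaluated for discrimination (AUC) and calibration against a newer survey; the file and variable names are hypothetical, and the predictors are not those of the 2003 model.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score
from sklearn.calibration import calibration_curve

cells = pd.read_csv("utm_cells.csv")  # hypothetical 10 x 10 km cell data
X = sm.add_constant(cells[["altitude", "precipitation", "human_density"]])

# Logistic regression of otter presence-absence on environmental predictors.
fit = sm.GLM(cells["presence_1998"], X, family=sm.families.Binomial()).fit()

# Evaluate the old model against the later survey: discrimination and calibration.
p_hat = fit.predict(X)
print("AUC:", roc_auc_score(cells["presence_2008"], p_hat))
obs_freq, pred_prob = calibration_curve(cells["presence_2008"], p_hat, n_bins=10)
print(np.column_stack([pred_prob, obs_freq]))
```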

Relevance: 90.00%

Abstract:

Understanding the natural and forced variability of the atmospheric general circulation and its drivers is one of the grand challenges in climate science. It is of paramount importance to understand to what extent the systematic error of climate models affects the processes driving such variability. This is done by performing a set of simulations (ROCK experiments) with an intermediate-complexity atmospheric model (SPEEDY), in which the Rocky Mountains orography is increased or decreased to influence the structure of the North Pacific jet stream. For each of these modified-orography experiments, the climatic response to idealized sea surface temperature anomalies of varying intensity in the El Niño Southern Oscillation (ENSO) region is studied. ROCK experiments are characterized by variations in Pacific jet stream intensity whose range encompasses the spread of the systematic error found in Coupled Model Intercomparison Project (CMIP6) models. When forced with ENSO-like idealized anomalies, they exhibit a non-negligible sensitivity in the response pattern over the Pacific-North American region, indicating that the model mean state can affect the model response to ENSO. It is found that the classical Rossby wave train response to ENSO is more meridionally oriented when the Pacific jet stream is weaker, and more zonally oriented when the jet is stronger. Rossby wave linear theory suggests that a stronger jet implies a stronger waveguide, which traps Rossby waves at lower latitudes and favours their zonal propagation. The shape of the dynamical response to ENSO affects the ENSO impacts on surface temperature and precipitation over Central and North America. A comparison of the SPEEDY results with CMIP6 models suggests a wider applicability of the results to more resource-demanding general circulation models (GCMs), opening the way for future work on the relationship between Pacific jet misrepresentation and the response to external forcing in fully fledged GCMs.
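
The waveguide argument invoked above is usually summarized by the stationary Rossby wavenumber of linear barotropic theory (generic notation, not quoted from the study); latitudes where K_s has a local maximum act as a waveguide that ducts the wave train zonally.

```latex
% Stationary wavenumber for barotropic Rossby waves on a zonal-mean flow U:
K_s = \left(\frac{\beta_M}{U}\right)^{1/2},
\qquad \beta_M = \beta - \frac{\partial^2 U}{\partial y^2},
```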

Relevance: 90.00%

Abstract:

The accurate representation of the Earth Radiation Budget by General Circulation Models (GCMs) is a fundamental requirement for providing reliable historical and future climate simulations. In this study, we found reasonable agreement between the integrated energy fluxes at the top of the atmosphere simulated by 34 state-of-the-art climate models and the observations provided by the Clouds and the Earth's Radiant Energy System (CERES) mission on a global scale, but large regional biases were detected throughout the globe. Furthermore, we highlighted that good agreement between simulated and observed integrated Outgoing Longwave Radiation (OLR) fluxes may result from the cancellation of opposite-in-sign systematic errors localized in different spectral ranges. To avoid this and to understand the causes of these biases, we compared the observed Earth emission spectra, measured by the Infrared Atmospheric Sounding Interferometer (IASI) in the period 2008-2016, with synthetic radiances computed on the basis of the atmospheric fields provided by the EC-Earth GCM. For this purpose, the fast σ-IASI radiative transfer model was used, after its validation and implementation in EC-Earth. From the comparison between observed and simulated spectral radiances, a positive temperature bias in the stratosphere and a negative temperature bias in the middle troposphere, as well as a dry bias of the water vapor concentration in the upper troposphere, were identified in the EC-Earth climate model. The analysis was performed in clear-sky conditions, but the feasibility of extending it to the presence of clouds, whose impact on radiation represents the greatest source of uncertainty in climate models, has also been demonstrated. Finally, the analysis of simulated and observed OLR trends indicated good agreement and provided detailed information on the spectral fingerprints of the evolution of the main climate variables.

Relevance: 90.00%

Abstract:

Intermediate-complexity general circulation models are a fundamental tool for investigating the role of internal and external variability within the general circulation of the atmosphere and ocean. The model used in this thesis is an intermediate-complexity atmospheric general circulation model (SPEEDY) coupled to a state-of-the-art modelling framework for the ocean (NEMO). We assess to what extent the model allows a realistic simulation of the most prominent natural mode of variability at interannual time scales: the El Niño Southern Oscillation (ENSO). To a good approximation, the model represents the ENSO-induced Sea Surface Temperature (SST) pattern in the equatorial Pacific, despite a cold-tongue-like bias. The model underestimates (overestimates) the typical ENSO spatial variability during the winter (summer) seasons. The mid-latitude response to ENSO reveals that the typical poleward stationary Rossby wave train is reasonably well represented. The spectral decomposition of ENSO features a spectrum that lacks periodicity at high frequencies and is overly periodic at interannual timescales. We then implemented an idealized transient mean-state change in the SPEEDY model. A warmer climate is simulated by an alteration of the parametrized radiative fluxes corresponding to doubled carbon dioxide absorptivity. Results indicate that the globally averaged surface air temperature increases by 0.76 K. Regionally, the induced signal on the SST field features a significant warming over the central-western Pacific and an El Niño-like warming in the subtropics. In general, the model features a weakening of the tropical Walker circulation and a poleward expansion of the local Hadley cell. This response is also detected in a poleward rearrangement of the tropical convective rainfall pattern. The model setup implemented here provides valid theoretical support for future studies on climate sensitivity and forced modes of variability under mean-state changes.

Relevance: 80.00%

Abstract:

To investigate the degree of T2 relaxometry changes over time in groups of patients with familial mesial temporal lobe epilepsy (FMTLE) and asymptomatic relatives, we conducted both cross-sectional and longitudinal analyses of T2 relaxometry with Aftervoxel, an in-house software for medical image visualization. The cross-sectional study included 35 subjects (26 with FMTLE and 9 asymptomatic relatives) and 40 controls; the longitudinal study comprised 30 subjects (21 with FMTLE and 9 asymptomatic relatives; the mean time interval between MRIs was 4.4 ± 1.5 years) and 16 controls. To increase the size of our groups of patients and relatives, we combined data acquired on 2 scanners (2T and 3T) and obtained z-scores using their respective controls. A general linear model in SPSS 21® was used for statistical analysis. In the cross-sectional analysis, elevated T2 relaxometry was identified for subjects with seizures and intermediate values for asymptomatic relatives compared to controls. Subjects with MRI signs of hippocampal sclerosis presented elevated T2 relaxometry in the ipsilateral hippocampus, while patients and asymptomatic relatives with normal MRI presented elevated T2 values in the right hippocampus. The longitudinal analysis revealed a significant increase in T2 relaxometry for the ipsilateral hippocampus exclusively in patients with seizures. The longitudinal increase of T2 signal in patients with seizures suggests the existence of an interaction between ongoing seizures and the underlying pathology, causing progressive damage to the hippocampus. The identification of elevated T2 relaxometry in asymptomatic relatives and in patients with normal MRI suggests that genetic factors may be involved in the development of some mild hippocampal abnormalities in FMTLE.
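
A minimal sketch of the scanner-wise z-scoring step described above, which allows pooling the 2T and 3T data; the file and column names are hypothetical.

```python
import pandas as pd

df = pd.read_csv("t2_relaxometry.csv")  # hypothetical: columns scanner, group, t2_hippocampus

# Convert T2 values to z-scores using the control mean/SD of the matching scanner,
# so data from the 2T and 3T scanners can be pooled in one analysis.
ctrl = df[df["group"] == "control"].groupby("scanner")["t2_hippocampus"].agg(["mean", "std"])
df = df.join(ctrl, on="scanner")
df["t2_z"] = (df["t2_hippocampus"] - df["mean"]) / df["std"]
```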

Relevance: 80.00%

Abstract:

This is an ecological, analytical and retrospective study comprising the 645 municipalities in the State of São Paulo, the scope of which was to determine the relationship between socioeconomic and demographic variables, the model of care, and infant mortality rates in the period from 1998 to 2008. The ratio of average annual change for each indicator per coverage stratum was calculated. Infant mortality was analyzed with a model for repeated measures over time, adjusted for the following correction variables: the city's population, the proportion of Family Health Programs (PSFs) deployed, the proportion of Growth Acceleration Programs (PACs) deployed, per capita GDP and the SPSRI (São Paulo social responsibility index). The analysis was performed with generalized linear models, assuming a gamma distribution. Multiple comparisons were performed with likelihood ratio tests with an approximate chi-square distribution, considering a significance level of 5%. There was a decrease in infant mortality over the years (p < 0.05), with no significant difference from 2004 to 2008 (p > 0.05). The proportion of PSFs deployed (p < 0.0001) and per capita GDP (p < 0.0001) were significant in the model. The decline in infant mortality in this period was influenced by the growth of per capita GDP and of PSFs.
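
A rough sketch of a repeated-measures gamma GLM of the kind described, expressed here as a GEE with municipalities as clusters; the variable names, log link and exchangeable working correlation are assumptions for illustration, not the published specification.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

panel = pd.read_csv("municipalities_1998_2008.csv")  # hypothetical long-format panel

# Gamma GLM for infant mortality with repeated measures per municipality,
# fitted as a GEE with an exchangeable working correlation.
model = smf.gee(
    "infant_mortality ~ year + psf_coverage + pac_coverage + gdp_per_capita + spsri + population",
    groups="municipality_id",
    data=panel,
    family=sm.families.Gamma(link=sm.families.links.Log()),
    cov_struct=sm.cov_struct.Exchangeable(),
)
print(model.fit().summary())
```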

Relevance: 80.00%

Abstract:

Resistant hypertension (RHTN) includes patients with controlled blood pressure (BP) (CRHTN) and uncontrolled BP (UCRHTN). In fact, RHTN patients are more likely to have target organ damage (TOD), and resistin, leptin and adiponectin may affect BP control in these subjects. We assessed the relationship between adipokines levels and arterial stiffness, left ventricular hypertrophy (LVH) and microalbuminuria (MA). This cross-sectional study included CRHTN (n=51) and UCRHTN (n=38) patients for evaluating body mass index, ambulatory blood pressure monitoring, plasma adiponectin, leptin and resistin concentrations, pulse wave velocity (PWV), MA and echocardiography. Leptin and resistin levels were higher in UCRHTN, whereas adiponectin levels were lower in this same subgroup. Similarly, arterial stiffness, LVH and MA were higher in UCRHTN subgroup. Adiponectin levels negatively correlated with PWV (r=-0.42, P<0.01), and MA (r=-0.48, P<0.01) only in UCRHTN. Leptin was positively correlated with PWV (r=0.37, P=0.02) in UCRHTN subgroup, whereas resistin was not correlated with TOD in both subgroups. Adiponectin is associated with arterial stiffness and renal injury in UCRHTN patients, whereas leptin is associated with arterial stiffness in the same subgroup. Taken together, our results showed that those adipokines may contribute to vascular and renal damage in UCRHTN patients.

Relevance: 80.00%

Abstract:

Machado-Joseph disease (MJD/SCA3) is the most frequent spinocerebellar ataxia, characterized by brainstem, basal ganglia and cerebellar damage. Few magnetic resonance imaging based studies have investigated damage in the cerebral cortex. The objective was to determine whether patients with MJD/SCA3 have cerebral cortex atrophy, to identify regions more susceptible to damage and to look for the clinical and neuropsychological correlates of such lesions. Forty-nine patients with MJD/SCA3 (mean age 47.7 ± 13.0 years, 27 men) and 49 matched healthy controls were enrolled. All subjects underwent magnetic resonance imaging scans in a 3 T device, and three-dimensional T1 images were used for volumetric analyses. Measurement of cortical thickness and volume was performed using the FreeSurfer software. Groups were compared using ANCOVA with age, gender and estimated intracranial volume as covariates, and a general linear model was used to assess correlations between atrophy and clinical variables. Mean CAG expansion, Scale for the Assessment and Rating of Ataxia (SARA) score and age at onset were 72.1 ± 4.2, 14.7 ± 7.3 and 37.5 ± 12.5 years, respectively. The main findings were (i) bilateral paracentral cortex atrophy, as well as atrophy of the caudal middle frontal gyrus, superior and transverse temporal gyri, and lateral occipital cortex in the left hemisphere and of the supramarginal gyrus in the right hemisphere; (ii) volumetric reduction of the basal ganglia and hippocampi; (iii) a significant correlation between SARA score and brainstem and precentral gyrus atrophy. Furthermore, some of the affected cortical regions showed significant correlations with neuropsychological data. Patients with MJD/SCA3 have widespread cortical and subcortical atrophy. These structural findings correlate with the clinical manifestations of the disease, supporting the concept that cognitive/motor impairment and cerebral damage are related in this disease.
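
A minimal sketch of the ANCOVA and general linear model steps described above, using statsmodels formulas; the column names and the choice of the precentral-gyrus thickness as outcome are hypothetical.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

subjects = pd.read_csv("freesurfer_measures.csv")  # hypothetical: one row per subject

# ANCOVA: compare cortical thickness between MJD/SCA3 patients and controls,
# adjusting for age, gender and estimated intracranial volume (eTIV).
fit = smf.ols("thickness_precentral ~ C(group) + age + C(gender) + etiv", data=subjects).fit()
print(sm.stats.anova_lm(fit, typ=2))

# General linear model: association between regional atrophy and ataxia severity (SARA).
print(smf.ols("thickness_precentral ~ sara + age + C(gender) + etiv",
              data=subjects[subjects["group"] == "patient"]).fit().summary())
```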

Relevance: 80.00%

Abstract:

OBJECTIVES: To assess risk and protective factors for chronic noncommunicable diseases (CNCD) and to identify social inequalities in their distribution among Brazilian adults. METHODS: The data used were collected in 2007 through VIGITEL, an ongoing population-based telephone survey. This surveillance system has been implemented in all Brazilian state capitals, and over 54,000 interviews were analyzed. Age-adjusted prevalence ratios for trends across schooling levels were calculated using Poisson regression with schooling modelled as a linear term. RESULTS: These analyses showed differences in the prevalence of risk and protective factors for CNCD by gender and schooling. Among men, the prevalence of overweight, consumption of meat with visible fat, and dyslipidemia was higher among those with more schooling, while tobacco use, sedentary lifestyle, and high blood pressure were lower. Among women, tobacco use, overweight, obesity, high blood pressure and diabetes were lower among those with more schooling, while consumption of meat with visible fat and sedentary lifestyle were higher. As for protective factors, fruit and vegetable intake and physical activity were higher in both men and women with more schooling. CONCLUSION: Gender and schooling influence risk and protective factors for CNCD, with values generally less favorable for men. VIGITEL is a useful tool for monitoring these factors in the Brazilian population.
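
A minimal sketch of how age-adjusted prevalence ratios across schooling can be obtained with Poisson regression in statsmodels; the variable names, the robust (HC1) standard errors and the survey extract are assumptions for illustration, not the VIGITEL analysis code.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

survey = pd.read_csv("vigitel_2007.csv")  # hypothetical individual-level extract

# Age-adjusted prevalence ratio for a risk factor across schooling levels:
# Poisson regression with robust standard errors; exponentiated coefficients
# are interpreted as prevalence ratios.
fit = smf.glm("smoker ~ schooling_years + age", data=survey,
              family=sm.families.Poisson()).fit(cov_type="HC1")
print(np.exp(fit.params))               # prevalence ratio per additional year of schooling
print(fit.pvalues["schooling_years"])   # test of linear trend across schooling
```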

Relevance: 80.00%

Abstract:

Leptin, thyroglobulin and diacylglycerol O-acyltransferase play important roles in fat metabolism. Fat deposition influences meat quality and consumers' choice. The aim of this study was to determine the allele and genotype frequencies of polymorphisms in the bovine genes encoding leptin (LEP), thyroglobulin (TG) and diacylglycerol O-acyltransferase (DGAT1). A further objective was to establish the effects of these polymorphisms on meat characteristics. We genotyped 147 animals belonging to the Nelore (Bos indicus), Canchim (5/8 Bos taurus + 3/8 Bos indicus), Rubia Gallega X Nelore (1/2 Bos taurus + 1/2 Bos indicus), Brangus three-way cross (9/16 Bos taurus + 7/16 Bos indicus) and Braunvieh three-way cross (3/4 Bos taurus + 1/4 Bos indicus) breeds. Backfat thickness, total lipids, marbling score, ribeye area and shear force were analysed using the General Linear Model (GLM) procedure of the SAS software. The least squares means of genotypes and genetic groups were compared using Tukey's test. Allele frequencies vary among the genetic groups, depending on the degree of Bos indicus versus Bos taurus influence. The LEP polymorphism segregates in pure Bos indicus Nelore animals, which is a new finding. The T allele of TG is fixed in Nelore, and DGAT1 segregates in all groups, but the frequency of allele A is lower in Nelore animals. The results showed no association between the genotypes and the traits studied, but a genetic-group effect on these traits was found. Thus, the genetic background remains relevant for fat deposition and meat tenderness, but the gene markers developed for Bos taurus may be insufficient for Bos indicus.
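
A rough Python analogue of the SAS GLM-plus-Tukey workflow described above, using statsmodels; the file and column names are hypothetical.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.stats.multicomp import pairwise_tukeyhsd

animals = pd.read_csv("carcass_traits.csv")  # hypothetical: one row per animal

# Linear model of backfat thickness on DGAT1 genotype and genetic group,
# followed by Tukey's HSD comparison of genetic-group means.
fit = smf.ols("backfat ~ C(dgat1_genotype) + C(genetic_group)", data=animals).fit()
print(sm.stats.anova_lm(fit, typ=2))
print(pairwise_tukeyhsd(animals["backfat"], animals["genetic_group"]))
```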