924 results for Dairy cattle Breeding Australia Statistics Data processing
Abstract:
Technological change is nowadays understood as a playing field involving cultural and economic processes of appreciation and depreciation of the social aspects of the family unit. The exclusion of small producers from the activity is used as an argument to show that, in contemporary inter-capitalist competition, family modes of production occupy restricted social positions in terms of technical progress and of cultural and economic appreciation. The state, a co-participant in the modernization process, is relevant as a financing agent, a provider of technical training and of infrastructure, that is, through macro- and microeconomic policies that can create sustainable conditions allowing the family producer not only to enter the activity but, above all, to remain in it. This study therefore aims to identify and analyze the family producer, through its limits and potentialities, under the thesis that it would be the main agent responsible for boosting Brazilian milk production in quantity and quality. To this end, results were compared from a field survey, with data collected through semi-structured open interviews, in a sample of 108 producers who effectively responded: 59 family farmers with an active DAP (the research focus) and 49 employer producers from the municipality of Monte Alegre de Minas - MG. Technological indices were used to identify the developmental stage of the producers, allowing a comparative study between them. The field research covered the entire rural area of the municipality of Monte Alegre de Minas - MG and found that the majority of family farmers presented lower technological indices than the employer producers. However, it also allowed us to state that the family producer, when assisted by public policies, can be the agent of transformation of dairy farming.
Abstract:
A substantial amount of information on the Internet is present in the form of text. The value of this semi-structured and unstructured data has been widely acknowledged, with consequent scientific and commercial exploitation. The ever-increasing data production, however, pushes data analytic platforms to their limit. This thesis proposes techniques for more efficient textual big data analysis suitable for the Hadoop analytic platform. This research explores the direct processing of compressed textual data. The focus is on developing novel compression methods with a number of desirable properties to support text-based big data analysis in distributed environments. The novel contributions of this work include the following. Firstly, a Content-aware Partial Compression (CaPC) scheme is developed. CaPC makes a distinction between informational and functional content in which only the informational content is compressed. Thus, the compressed data is made transparent to existing software libraries which often rely on functional content to work. Secondly, a context-free bit-oriented compression scheme (Approximated Huffman Compression) based on the Huffman algorithm is developed. This uses a hybrid data structure that allows pattern searching in compressed data in linear time. Thirdly, several modern compression schemes have been extended so that the compressed data can be safely split with respect to logical data records in distributed file systems. Furthermore, an innovative two layer compression architecture is used, in which each compression layer is appropriate for the corresponding stage of data processing. Peripheral libraries are developed that seamlessly link the proposed compression schemes to existing analytic platforms and computational frameworks, and also make the use of the compressed data transparent to developers. The compression schemes have been evaluated for a number of standard MapReduce analysis tasks using a collection of real-world datasets. In comparison with existing solutions, they have shown substantial improvement in performance and significant reduction in system resource requirements.
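To make the idea of content-aware partial compression more concrete, the following Python sketch illustrates the general principle under stated assumptions: informational tokens are replaced by short codes while the functional content (whitespace and newlines) is left untouched, so that line- and field-oriented tools can still split the compressed text. It is only an illustration of the principle, not the CaPC scheme developed in the thesis.

    from collections import Counter

    def build_code_table(tokens):
        # Assign shorter codes to the most frequent tokens (hypothetical codec).
        freq = Counter(tokens)
        return {tok: "~%d" % i for i, (tok, _) in enumerate(freq.most_common())}

    def partial_compress(lines):
        # Replace informational tokens with codes; keep whitespace structure intact.
        tokens = [t for line in lines for t in line.split()]
        table = build_code_table(tokens)
        return [" ".join(table[t] for t in line.split()) for line in lines], table

    sample = ["the quick brown fox", "the lazy dog", "the quick dog"]
    compressed, table = partial_compress(sample)
    # Records are still one per line, so existing record readers keep working.
    print(compressed)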
Abstract:
Cloud computing offers massive scalability and elasticity required by many scientific and commercial applications. Combining the computational and data handling capabilities of clouds with parallel processing also has the potential to tackle Big Data problems efficiently. Science gateway frameworks and workflow systems enable application developers to implement complex applications and make these available for end-users via simple graphical user interfaces. The integration of such frameworks with Big Data processing tools on the cloud opens new opportunities for application developers. This paper investigates how workflow systems and science gateways can be extended with Big Data processing capabilities. A generic approach based on infrastructure aware workflows is suggested and a proof of concept is implemented based on the WS-PGRADE/gUSE science gateway framework and its integration with the Hadoop parallel data processing solution based on the MapReduce paradigm in the cloud. The provided analysis demonstrates that the methods described to integrate Big Data processing with workflows and science gateways work well in different cloud infrastructures and application scenarios, and can be used to create massively parallel applications for scientific analysis of Big Data.
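As a concrete illustration of the MapReduce paradigm referred to above, the following minimal Python mapper and reducer could be run with Hadoop Streaming; it is a generic word-count sketch, not part of the WS-PGRADE/gUSE integration described in the paper, and the command-line switch used to select the role is an assumption.

    import sys

    def mapper():
        # Emit one (word, 1) pair per token read from standard input.
        for line in sys.stdin:
            for word in line.split():
                print("%s\t1" % word)

    def reducer():
        # Input arrives sorted by key; sum the counts for each word.
        current, count = None, 0
        for line in sys.stdin:
            word, n = line.rstrip("\n").split("\t")
            if word != current and current is not None:
                print("%s\t%d" % (current, count))
                count = 0
            current = word
            count += int(n)
        if current is not None:
            print("%s\t%d" % (current, count))

    if __name__ == "__main__":
        mapper() if sys.argv[1:] == ["map"] else reducer()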
Abstract:
The large upfront investments required for game development pose a severe barrier for the wider uptake of serious games in education and training. Also, there is a lack of well-established methods and tools that support game developers in preserving and enhancing the games’ pedagogical effectiveness. The RAGE project, which is a Horizon 2020 funded research project on serious games, addresses these issues by making available reusable software components that aim to support the pedagogical qualities of serious games. In order to easily deploy and integrate these game components in a multitude of game engines, platforms and programming languages, RAGE has developed and validated a hybrid component-based software architecture that preserves component portability and interoperability. While a first set of software components is being developed, this paper presents selected examples to explain the overall system’s concept and its practical benefits. First, the Emotion Detection component uses the learners’ webcams for capturing their emotional states from facial expressions. Second, the Performance Statistics component is an add-on for learning analytics data processing, which allows instructors to track and inspect learners’ progress without having to deal with the required statistical computations. Third, a set of language processing components accommodates the analysis of learners’ textual inputs, facilitating comprehension assessment and prediction. Fourth, the Shared Data Storage component provides a technical solution for data storage - e.g. for player data or game world data - across multiple software components. The presented components are representative of the anticipated RAGE library, which will include up to forty reusable software components for serious gaming, addressing diverse pedagogical dimensions.
Abstract:
This paper is part of a special issue of Applied Geochemistry focusing on reliable applications of compositional multivariate statistical methods. This study outlines the application of compositional data analysis (CoDa) to calibration of geochemical data and multivariate statistical modelling of geochemistry and grain-size data from a set of Holocene sedimentary cores from the Ganges-Brahmaputra (G-B) delta. Over the last two decades, understanding near-continuous records of sedimentary sequences has required the use of core-scanning X-ray fluorescence (XRF) spectrometry for both terrestrial and marine sequences. Initial XRF data are generally unusable in ‘raw’ format, requiring data processing in order to remove instrument bias, as well as informed sequence interpretation. The applicability of conventional calibration equations to core-scanning XRF data is further limited by the constraints posed by unknown measurement geometry and specimen homogeneity, as well as matrix effects. Log-ratio based calibration schemes have been developed and applied to clastic sedimentary sequences, focusing mainly on energy dispersive XRF (ED-XRF) core-scanning. This study has applied high-resolution core-scanning XRF to Holocene sedimentary sequences from the tide-dominated Indian Sundarbans (Ganges-Brahmaputra delta plain). The Log-Ratio Calibration Equation (LRCE) was applied to a sub-set of core-scan and conventional ED-XRF data to quantify elemental composition. This provides a robust calibration scheme using reduced major axis regression of log-ratio transformed geochemical data. Through partial least squares (PLS) modelling of geochemical and grain-size data, it is possible to derive robust proxy information for the Sundarbans depositional environment. The application of these techniques to Holocene sedimentary data offers an improved methodological framework for unravelling Holocene sedimentation patterns.
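As a minimal sketch of the kind of calibration described above, the Python snippet below applies a reduced major axis regression to log-ratio transformed data; the element names and numerical values are illustrative placeholders, and the snippet is not the paper's LRCE implementation.

    import numpy as np

    def log_ratio(values, denominator):
        # Log-ratio against a common denominator element.
        return np.log(values / denominator)

    def rma_fit(x, y):
        # Reduced major axis slope: ratio of standard deviations, signed by the correlation.
        r = np.corrcoef(x, y)[0, 1]
        slope = np.sign(r) * np.std(y, ddof=1) / np.std(x, ddof=1)
        return slope, np.mean(y) - slope * np.mean(x)

    # Core-scanner counts and conventional ED-XRF concentrations for one element
    # (e.g. Fe), each expressed as a log-ratio against a denominator element (e.g. Ca).
    scan_lr = log_ratio(np.array([1200.0, 900.0, 1500.0, 1100.0]),
                        np.array([3000.0, 2800.0, 3100.0, 2900.0]))
    conc_lr = log_ratio(np.array([4.1, 3.2, 5.0, 3.9]),
                        np.array([10.2, 9.8, 10.5, 10.0]))
    slope, intercept = rma_fit(scan_lr, conc_lr)
    calibrated_lr = slope * scan_lr + intercept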
Abstract:
Background: To our knowledge, few studies have examined the interaction between nutrient availability and the molecular structure changes induced by different processing methods in dairy cattle. The objective of this study was to investigate the effect of heat processing methods on the interaction between nutrient availability and molecular structure, in terms of functional groups related to the inherent protein and starch structure of oat grains, over two consecutive years with three replications per year. Method: The oat grains were kept raw (control) or heated either in an air-draft oven (dry roasting, DO) at 120 °C for 60 min or under microwave irradiation (MIO) for 6 min. The molecular structure features were revealed by vibrational infrared molecular spectroscopy. Results: Rumen degradability of dry matter, protein and starch was significantly lower (P < 0.05) for MIO than for the control and DO treatments. A higher protein α-helix to β-sheet ratio and a lower amide I to starch area ratio were observed for MIO compared with the DO and/or raw treatments. A negative correlation (−0.99, P < 0.01) was observed between the α-helix or amide I to starch area ratio and dry matter. A positive correlation (0.99, P < 0.01) was found between protein β-sheet and crude protein. Conclusion: The results reveal that oat grains are more sensitive to microwave irradiation than to dry heating in terms of protein and starch molecular profile and nutrient availability in ruminants.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
Sports training is "a process of perfecting the athlete, directed according to scientific principles, which, through planned and systematic influences (loads) on performance capacity, aims to lead the athlete to high and superior performances in a sport or sporting discipline" (Harre, 1982). Appropriate sports training should begin in childhood, so that the young athlete can progressively and systematically develop body and mind in order to reach sporting excellence (Bompa, 2000; Weineck, 1997). However, many coaches, in their attempt to reach high-level results quickly, expose young athletes to very specific and rigorous sports training without taking the time to properly develop the physical and motor abilities and the fundamental motor skills underlying sport-specific skills (Bompa, 2000), hence the term "early specialization". To counter the harmful consequences of early specialization, new training approaches have been proposed. One way of doing so would be to practise several different sports at a young age (Fraser-Thomas, Côté and Deakin, 2008; Gould and Carson, 2004; Judge and Gilreath, 2009; LeBlanc and Dickson, 1997; Mostafavifar, Best and Myer, 2013), hence the term "sport diversification". Several sporting and professional organizations have decided to promote and implement programs based on sport diversification (Kaleth and Mikesky, 2010). It was thus after becoming aware of the harmful effects of early specialization that physical activity professionals at a Quebec secondary school (a physical educator, a kinesiologist and a sport development officer) set up an innovative multisport-study program in the first cycle of secondary school, inspired by sport science and by the guidelines of the long-term athlete development (LTAD) model (Balyi, Cardinal, Higgs, Norris and Way, 2005). The present research project examines the development of physical and motor abilities in young athletes enrolled in a sport-specialization program and in young athletes enrolled in a sport-diversification program at the "Train to Train" stage (12 to 16 years) of the long-term athlete development model (Balyi et al., 2005). The main objective of this study is to document the evolution of the physical and motor abilities of young student-athletes enrolled, on the one hand, in a soccer sport-study program (specialization) and, on the other hand, in a multisport-study program (diversification). More specifically, this study attempts (a) to draw a detailed portrait of the evolution of the physical and motor abilities of the student-athletes in each program and to relate it to the annual planning of each sport program, and (b) to report the differences in physical and motor abilities observed between the two programs. The research project was carried out in a secondary school in the province of Quebec. In total, 53 first-year secondary student-athletes were retained for the research project according to their willingness to participate in the study: 23 enrolled in the soccer sport-study program and 30 enrolled in the multisport-study program.
The student-athletes were all aged 11 to 13. Thirteen standardized tests of physical and motor abilities were administered to the student-athletes of both sport programs at the beginning, middle and end of the school year. The data were processed using descriptive statistics and a repeated-measures analysis of variance. The results reveal that (a) the physical and motor abilities of the student-athletes of both sport programs improved over the school year, (b) it is relatively easy to relate the evolution of the student-athletes' physical and motor abilities to the annual planning of each sport program, (c) the student-athletes of the multisport-study program generally performed similarly to those of the soccer sport-study program, and (d) the student-athletes of the soccer sport-study program improved their cardiorespiratory endurance more over the school year, whereas those of the multisport-study program improved more in (a) arm segmental speed, (b) agility in the circle-run test and (c) lower-limb muscular power, thereby confirming that the physical and motor abilities developed in young athletes who specialize early are rather specific to the sport practised (Balyi et al., 2005; Bompa, 1999; Cloes, Delfosse, Ledent and Piéron, 1994; Mattson and Richards, 2010), whereas those developed through sport diversification are more varied (Coakley, 2010; Gould and Carson, 2004; White and Oatman, 2009). These results can be explained by (a) the specificity or diversity of the tasks proposed during training sessions, (b) the time devoted to each of these tasks and (c) the demands of playing soccer compared with the demands of practising several sporting disciplines. However, the results remain difficult to interpret because of several sources of bias: (a) physical maturation, (b) the number of training hours completed during the previous school year, (c) the number of training hours offered by the two sport programs under study and (d) physical and sporting activities practised outside school. Moreover, this study does not assess the quality of the interventions and exercises proposed during training, nor the student-athletes' motivation to take part in the training sessions or the physical and motor tests. Finally, it would be interesting to replicate the present study with different sporting disciplines and to highlight the specific contributions of each discipline to the development of young athletes' physical and motor abilities.
Abstract:
Recent advances in the massively parallel computational abilities of graphical processing units (GPUs) have increased their use for general purpose computation, as companies look to take advantage of big data processing techniques. This has given rise to the potential for malicious software targeting GPUs, which is of interest to forensic investigators examining the operation of software. The ability to carry out reverse-engineering of software is of great importance within the security and forensics fields, particularly when investigating malicious software or carrying out forensic analysis following a successful security breach. Due to the complexity of the Nvidia CUDA (Compute Unified Device Architecture) framework, it is not clear how best to approach the reverse engineering of a piece of CUDA software. We carry out a review of the different binary output formats which may be encountered from the CUDA compiler, and their implications on reverse engineering. We then demonstrate the process of carrying out disassembly of an example CUDA application, to establish the various techniques available to forensic investigators carrying out black-box disassembly and reverse engineering of CUDA binaries. We show that the Nvidia compiler, using default settings, leaks useful information. Finally, we demonstrate techniques to better protect intellectual property in CUDA algorithm implementations from reverse engineering.
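As an illustration of the kind of black-box disassembly workflow discussed above, the sketch below shells out to the CUDA toolkit's cuobjdump utility from Python; it assumes cuobjdump is installed and on the PATH, the binary name is a placeholder, and this is not the authors' exact procedure.

    import subprocess

    def dump(binary, flag):
        # flag is e.g. "-sass" (device assembly) or "-ptx" (embedded PTX, if any).
        result = subprocess.run(["cuobjdump", flag, binary],
                                capture_output=True, text=True, check=True)
        return result.stdout

    if __name__ == "__main__":
        sass = dump("./example_cuda_app", "-sass")   # placeholder binary name
        print(sass[:2000])                           # first part of the disassembly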
Abstract:
Portuguese non-higher education schools are equipped with infrastructure and equipment that make it possible to bring the world into the classroom, making the teaching and learning process richer and more motivating for students. The institutional adoption of a platform that follows the principles of the social web, SAPO Campus (SC), defined by openness, sharing, integration, innovation and personalization, can catalyse processes of change and innovation. The purpose of this study was to follow the process of adoption of SC in five schools, and to analyse its impact on the teaching and learning process and the way students and teachers relate to this technology. The schools involved were divided into two groups: in the first group, made up of three schools, the follow-up was of a more interventionist and present nature, while in the second group, composed of two schools, only the dynamics that developed during the adoption and use of SC were observed. In the present study, a longitudinal multi-case study, data processing techniques such as descriptive statistics, content analysis and Social Network Analysis (SNA) were applied, with the aim of carrying out, through constant triangulation, an analysis of the impacts observed from the use of SC. These impacts can be situated at three different levels: the institution, the teachers and the students. Regarding the institutional adoption of a technology, it was found that such adoption sends a message to the whole organization and that, in the case of SC, it calls for collective participation in an open environment where hierarchies dissolve. It was also found that it should involve students in meaningful activities and the adoption of dynamic strategies, preferably integrated into a mobilizing project. The adoption of SC also catalysed dynamics that brought about changes in patterns of content consumption and production, as well as a different attitude towards the role of the social web in the teaching and learning process. The conclusions also point to a set of factors, observed in the study, that had an impact on the adoption process, such as the role of leadership, the importance of teacher training, school culture, integration into a pedagogical project and, at a more basic level, issues of access to technology. Some communities built around SAPO Campus, involving teachers, students and the wider community, evolved towards self-sustainability, along a path of reflection on pedagogical practices and sharing of experiences.
Abstract:
Master's dissertation in Animal Science (Engenharia Zootécnica), 18 July 2016, Universidade dos Açores.
Abstract:
The only method used to date to measure dissolved nitrate concentration (NITRATE) with sensors mounted on profiling floats is based on the absorption of light at ultraviolet wavelengths by the nitrate ion (Johnson and Coletti, 2002; Johnson et al., 2010; 2013; D’Ortenzio et al., 2012). Nitrate has a modest UV absorption band with a peak near 210 nm, which overlaps the stronger absorption band of bromide, whose peak is near 200 nm. In addition, there is a much weaker absorption due to dissolved organic matter and light scattering by particles (Ogura and Hanya, 1966). The UV spectrum thus consists of three components: bromide, nitrate and a background due to organics and particles. The background also includes thermal effects on the instrument and slow drift. All of these latter effects (organics, particles, thermal effects and drift) tend to be smooth spectra that combine to form an absorption spectrum that is linear in wavelength over relatively short wavelength spans. If the light absorption spectrum is measured in the wavelength range of roughly 217 to 240 nm (the exact range is to some extent an operator decision), then the nitrate concentration can be determined. Two different instruments based on the same optical principles are in use for this purpose. The In Situ Ultraviolet Spectrophotometer (ISUS), built at MBARI or at Satlantic, has been mounted inside the pressure hull of Teledyne/Webb Research APEX and NKE Provor profiling floats, with the optics penetrating through the upper end cap into the water. The Satlantic Submersible Ultraviolet Nitrate Analyzer (SUNA) is placed on the outside of APEX, Provor, and Navis profiling floats in its own pressure housing and is connected to the float through an underwater cable that provides power and communications. Power, communications between the float controller and the sensor, and data processing requirements are essentially the same for both ISUS and SUNA. There are several possible algorithms that can be used for the deconvolution of nitrate concentration from the observed UV absorption spectrum (Johnson and Coletti, 2002; Arai et al., 2008; Sakamoto et al., 2009; Zielinski et al., 2011). In addition, the default algorithm available in Satlantic sensors is a proprietary approach, but this is not generally used on profiling floats. There are trade-offs in every approach. To date, almost all nitrate sensors on profiling floats have used the Temperature Compensated Salinity Subtracted (TCSS) algorithm developed by Sakamoto et al. (2009), and this document focuses on that method. It is likely that there will be further algorithm development, so it is necessary that the data systems clearly identify the algorithm that is used. It is also desirable that the data system allow prior data sets to be recalculated using new algorithms. To accomplish this, the float must report not just the computed nitrate but also the observed light intensity. The rule for obtaining a single NITRATE parameter is therefore: if the spectrum is present, NITRATE should be recalculated from the spectrum; this recalculation can also generate useful diagnostics of data quality.
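To make the deconvolution idea concrete, the following Python sketch fits a synthetic UV spectrum over roughly 217 to 240 nm as bromide (assumed to be predicted from temperature and salinity) plus nitrate plus a baseline that is linear in wavelength; the extinction spectra and numbers are illustrative placeholders, and this is not the full TCSS algorithm of Sakamoto et al. (2009).

    import numpy as np

    wavelengths = np.linspace(217.0, 240.0, 24)                # nm
    eps_no3 = 1e-3 * np.exp(-(wavelengths - 210.0) / 8.0)      # placeholder nitrate extinction
    abs_br = 0.5 * np.exp(-(wavelengths - 200.0) / 6.0)        # bromide absorbance predicted from T and S

    true_nitrate = 20.0                                        # arbitrary concentration units
    measured = abs_br + true_nitrate * eps_no3 + 0.01 - 1e-4 * wavelengths  # synthetic spectrum

    # Subtract the predicted bromide term, then fit nitrate plus a linear baseline.
    residual = measured - abs_br
    design = np.column_stack([eps_no3, np.ones_like(wavelengths), wavelengths])
    coef, *_ = np.linalg.lstsq(design, residual, rcond=None)
    nitrate_estimate = coef[0]                                 # recovers ~20 for this synthetic case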
Abstract:
The CATARINA Leg1 cruise was carried out from June 22 to July 24, 2012 on board the B/O Sarmiento de Gamboa, under the scientific supervision of Aida Rios (CSIC-IIM). It included a repeat of the OVIDE hydrological section, previously performed in June 2002, 2004, 2006, 2008 and 2010 as part of the CLIVAR program (section name A25), under the supervision of Herlé Mercier (CNRS-LPO). This section begins near Lisbon (Portugal), runs through the West European Basin and the Iceland Basin, crosses the Reykjanes Ridge (300 miles north of the Charlie-Gibbs Fracture Zone), and ends at Cape Hoppe (the southeastern tip of Greenland). The objective of this repeated hydrological section is to monitor the variability of water mass properties and main current transports in the basin, complementing the international observation array relevant for climate studies. In addition, the Labrador Sea was partly sampled (stations 101-108) between Greenland and Newfoundland, but heavy weather prevented completion of the section south of 53°40’N. The quality of the CTD data is essential to reach the first objective of the CATARINA project, i.e. to quantify the Meridional Overturning Circulation and water mass ventilation changes and their effect on changes in the ocean's anthropogenic carbon uptake and storage capacity. The CATARINA project was mainly funded by the Spanish Ministry of Science and Innovation and co-funded by the Fondo Europeo de Desarrollo Regional. The hydrological OVIDE section includes 95 surface-to-bottom stations from coast to coast, collecting profiles of temperature, salinity, oxygen and currents, spaced 2 to 25 nautical miles apart depending on the steepness of the topography. The position of the stations closely follows that of OVIDE 2002. In addition, 8 stations were carried out in the Labrador Sea. From the 24 bottles closed at various depths at each station, seawater samples were used for salinity and oxygen calibration, and for measurements of biogeochemical components that are not reported here. The data were acquired with a Seabird CTD (SBE911+) and an SBE43 dissolved oxygen sensor belonging to the Spanish UTM group. The SBE Data Processing software was used after decoding and cleaning the raw data. The LPO Matlab toolbox was then used to calibrate and bin the data, as was done for the previous OVIDE cruises, using on the one hand the pre- and post-cruise calibration results for the pressure and temperature sensors (performed at Ifremer) and on the other hand the water samples from the 24 rosette bottles at each station for the salinity and dissolved oxygen data. A final accuracy of 0.002 °C, 0.002 psu and 0.04 ml/l (2.3 µmol/kg) was obtained for the final profiles of temperature, salinity and dissolved oxygen, consistent with the international requirements of the WOCE program.
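As a minimal sketch of the bottle-based calibration step described above (not the LPO toolbox itself), the snippet below fits a simple linear correction between CTD sensor salinity at the bottle-closure depths and the bottle salinities, then applies it to a profile; all values are illustrative placeholders.

    import numpy as np

    ctd_sal_at_bottles = np.array([35.012, 34.981, 34.897, 34.953])  # sensor values at bottle stops
    bottle_sal = np.array([35.020, 34.989, 34.905, 34.960])          # laboratory salinometer values

    # Fit bottle = a * sensor + b by ordinary least squares.
    a, b = np.polyfit(ctd_sal_at_bottles, bottle_sal, 1)

    ctd_profile = np.array([35.10, 35.05, 34.98, 34.90, 34.85])      # one downcast, placeholder values
    calibrated_profile = a * ctd_profile + b
    residuals = bottle_sal - (a * ctd_sal_at_bottles + b)            # check against the ~0.002 psu target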
Abstract:
Assessing the fit of a model is an important final step in any statistical analysis, but this is not straightforward when complex discrete response models are used. Cross validation and posterior predictions have been suggested as methods to aid model criticism. In this paper a comparison is made between four methods of model predictive assessment in the context of a three level logistic regression model for clinical mastitis in dairy cattle; cross validation, a prediction using the full posterior predictive distribution and two “mixed” predictive methods that incorporate higher level random effects simulated from the underlying model distribution. Cross validation is considered a gold standard method but is computationally intensive and thus a comparison is made between posterior predictive assessments and cross validation. The analyses revealed that mixed prediction methods produced results close to cross validation whilst the full posterior predictive assessment gave predictions that were over-optimistic (closer to the observed disease rates) compared with cross validation. A mixed prediction method that simulated random effects from both higher levels was best at identifying the outlying level two (farm-year) units of interest. It is concluded that this mixed prediction method, simulating random effects from both higher levels, is straightforward and may be of value in model criticism of multilevel logistic regression, a technique commonly used for animal health data with a hierarchical structure.
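The sketch below illustrates, under stated assumptions, the kind of "mixed" predictive check described above for a two-level version of the problem: farm-level random effects are simulated from the fitted random-effect distribution rather than re-used, replicate data are generated, and predictive p-values flag outlying units; the parameter values and data are placeholders, not the paper's model.

    import numpy as np

    rng = np.random.default_rng(1)
    n_farms, cows_per_farm = 50, 40
    beta0, sigma_u = -2.0, 0.8                  # assumed posterior means: intercept and farm-level SD
    observed_cases = rng.binomial(cows_per_farm, 0.12, size=n_farms)   # placeholder observed counts

    n_rep = 1000
    replicates = np.empty((n_rep, n_farms))
    for r in range(n_rep):
        u_new = rng.normal(0.0, sigma_u, size=n_farms)   # simulated, not estimated, random effects
        p = 1.0 / (1.0 + np.exp(-(beta0 + u_new)))
        replicates[r] = rng.binomial(cows_per_farm, p)

    # Predictive p-value per farm: how extreme is the observed count under the model?
    p_values = (replicates >= observed_cases).mean(axis=0)
    outlying_farms = np.where((p_values < 0.025) | (p_values > 0.975))[0]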
Abstract:
Background: Copy number variations (CNVs) have been shown to account for substantial portions of observed genomic variation and have been associated with qualitative and quantitative traits and the onset of disease in a number of species. Information from high-resolution studies to detect, characterize and estimate population-specific variant frequencies will facilitate the incorporation of CNVs in genomic studies to identify genes affecting traits of importance. Results: Genome-wide CNVs were detected in high-density single nucleotide polymorphism (SNP) genotyping data from 1,717 Nelore (Bos indicus) cattle, and in NGS data from eight key ancestral bulls. A total of 68,007 and 12,786 distinct CNVs were observed, respectively. Cross-comparisons of results obtained for the eight resequenced animals revealed that 92 % of the CNVs were observed in both datasets, while 62 % of all detected CNVs were observed to overlap with previously validated cattle copy number variant regions (CNVRs). Observed CNVs were used for obtaining breed-specific CNV frequencies and identification of CNVRs, which were subsequently used for gene annotation. A total of 688 of the detected CNVRs were observed to overlap with 286 non-redundant QTLs associated with important production traits in cattle. All 34 CNVs previously reported to be associated with milk production traits in Holsteins were also observed in Nelore cattle. Comparisons of estimated frequencies of these CNVs in the two breeds revealed 14, 13, 6 and 14 regions with, respectively, high (>20 %), low (<20 %) and divergent (NEL > HOL and NEL < HOL) frequencies. Conclusions: The results obtained significantly enriched the bovine CNV map and enabled the identification of variants that are potentially associated with traits under selection in Nelore cattle, particularly in genome regions harboring QTLs affecting production traits.