998 results for Genetics - Data processing


Relevance:

30.00%

Publisher:

Abstract:

The objective of this study was to investigate, using simulated data, the effect of residual variance heterogeneity among contemporary groups (CG) on genetic evaluations of beef cattle, and to compare a weighted genetic evaluation (R ≠ Iσe²) with an evaluation that assumes homogeneity of variance (R = Iσe²). The trait studied was post-weaning weight gain adjusted to 345 days, simulated with a phenotypic variance of 300 kg² and a heritability of 0.4. The structure of a real data set was used to provide the CG and the parents associated with each animal's records. Five levels of residual variance heterogeneity were considered, such that the variance components were, on average, equal to those of the homogeneous-variance situation. As more pronounced levels of residual variance heterogeneity were considered, animals were selected from the CG with greater variability, especially under intense selection pressure. Regarding prediction consistency, the predicted breeding values of the progeny and of the cows were more affected by residual variance heterogeneity than those of the bulls. The weighting factor used reduced, but did not eliminate, the effect of variance heterogeneity. The weighted genetic evaluations produced results equal or superior to those obtained by evaluations assuming homogeneity of variance. Even when unnecessary, the use of weighted evaluations produced results no worse than those of the evaluations that assumed homogeneity of variance.
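
For context only, a sketch of the standard single-trait animal model in which this kind of comparison is usually framed (the abstract does not spell out the exact model used); the two residual structures being compared can be written as:

```latex
% Illustrative single-trait animal model; not necessarily the authors' exact specification.
y = Xb + Za + e, \qquad a \sim N(0,\, A\sigma_a^2)

% Unweighted evaluation (homogeneous residual variance):
\operatorname{Var}(e) = R = I\sigma_e^2

% Weighted evaluation (residual variance allowed to differ among contemporary
% groups j, so that R \neq I\sigma_e^2):
\operatorname{Var}(e) = R = \bigoplus_{j=1}^{q} I_{n_j}\,\sigma_{e_j}^2
```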

Relevance:

30.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

30.00%

Publisher:

Abstract:

A total of 17,767 weight records from 4,210 Santa Inês lambs were used to compare random regression models with different structures for modelling the residual variance in genetic studies of the growth curve. The fixed effects included in the analysis were contemporary group and age of the ewe at lambing. The fixed and random regressions were fitted with Legendre polynomials of orders 4 and 3, respectively. The residual variance was modelled through heterogeneous classes and through variance functions using ordinary and Legendre polynomials of orders 2 to 8. The model assuming homogeneous residual variance proved inadequate. According to the criteria used, a residual variance structure with seven heterogeneous classes provided the best fit, although a more parsimonious model with five classes could be used without loss of quality in fitting the variance of the data. Variance functions of any order fitted better than the class-based structures, and the ordinary polynomial of order 6 provided the best fit among the structures tested. The modelling of the residual affected the estimates of variances and genetic parameters. Besides changing the ranking of sires, the magnitude of the predicted breeding values varied considerably depending on the residual variance structure adopted.
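
As a sketch of what a residual variance function of this type looks like (the abstract does not give the exact parameterization used), with φ_k denoting an ordinary or Legendre polynomial basis evaluated at the age t standardized to [-1, 1]:

```latex
% Illustrative residual variance function; the study's exact parameterization
% is not given in the abstract.
t^{*} = 2\,\frac{t - t_{\min}}{t_{\max} - t_{\min}} - 1, \qquad
\sigma_e^2(t) = \sum_{k=0}^{K} b_k\,\phi_k(t^{*})
```

Some implementations model log σe²(t) instead, which guarantees positive variances; heterogeneous residual classes correspond to replacing the smooth function by a step function over age intervals.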

Relevance:

30.00%

Publisher:

Abstract:

This work develops a mathematical foundation for digital signal processing from the viewpoint of interval mathematics. It addresses the open problem of precision and representation of data in digital systems by proposing an interval version of signal representation. Since signal processing is a rich and complex area, the work restricts its focus to linear time-invariant systems. A vast literature exists in the area, but some concepts of interval mathematics need to be redefined or elaborated in order to build a solid theory of interval signal processing. We construct the basic foundations of signal processing in the interval setting, such as the basic properties of linearity, stability and causality, and an interval version of linear systems and their properties. Interval versions of the convolution and of the Z-transform are presented. Convergence of systems is analyzed using the interval Z-transform, an essentially interval distance, interval complex numbers, and an application to an interval filter.
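
A minimal sketch of what an interval convolution can look like, assuming ordinary closed intervals and the textbook rules of interval arithmetic; the thesis's own definitions (and its interval Z-transform) may differ:

```python
# Minimal sketch of an interval convolution, assuming standard closed-interval
# arithmetic; not necessarily the thesis's exact formulation.

class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = min(lo, hi), max(lo, hi)

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

def interval_convolution(x, h):
    """Discrete convolution y[n] = sum_k x[k] * h[n-k] with interval samples."""
    n_out = len(x) + len(h) - 1
    y = [Interval(0.0, 0.0) for _ in range(n_out)]
    for n in range(n_out):
        for k in range(len(x)):
            if 0 <= n - k < len(h):
                y[n] = y[n] + x[k] * h[n - k]
    return y

# Example: an interval signal filtered by an interval impulse response.
x = [Interval(0.9, 1.1), Interval(1.9, 2.1)]
h = [Interval(0.4, 0.6), Interval(0.4, 0.6)]
print(interval_convolution(x, h))
```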

Relevance:

30.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance:

30.00%

Publisher:

Abstract:

In recent decades, changes in the telecommunications industry, combined with competition driven by privatization and concession policies, have irrefutably stimulated the world market and given rise to a new reality. The effects in Brazil have become evident in significant growth rates: in 2012 the sector reached a net operating revenue of 128 billion dollars, placing the country among the five largest powers in mobile communications worldwide. In this context, an issue of increasing importance for the financial health of companies is their ability to retain their customers and to turn them into loyal customers. Customer churn has been generating monthly disconnection rates of about two to four percent, representing one of the biggest challenges for business management, since acquiring a new customer costs more than five times as much as retaining an existing one. For this purpose, models have been developed by means of structural equation modeling to identify the relationships between the various determinants of customer loyalty in the context of services. The original contribution of this thesis is to develop a loyalty model based on the identification of relationships between determinants of satisfaction (latent variables) and on the inclusion of attributes that determine the perception of service quality in the mobile communications industry, such as quality, satisfaction, value, trust, expectation and loyalty. The research will be conducted with customers of the operators through simple random sampling, using structured questionnaires. As a result, the proposed model and its statistical evaluation should allow operators to conclude that customer loyalty is directly influenced by the technical and operational quality of the services offered, and should also provide a satisfaction index for the mobile communication segment.
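
Purely as an illustration of the kind of structural relations such a model encodes (the actual paths estimated in the thesis are not given in the abstract), the structural part of a loyalty SEM might be sketched as:

```latex
% Hypothetical structural equations; the thesis's actual path diagram is not
% specified in the abstract.
\text{Satisfaction} = \gamma_1\,\text{Quality} + \gamma_2\,\text{Value} + \gamma_3\,\text{Expectation} + \zeta_1
\text{Trust} = \gamma_4\,\text{Satisfaction} + \zeta_2
\text{Loyalty} = \beta_1\,\text{Satisfaction} + \beta_2\,\text{Trust} + \zeta_3
```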

Relevance:

30.00%

Publisher:

Abstract:

The continuous technical and economic evaluation of timber harvesting systems is intrinsic to forest companies, since it corresponds to a stage of great importance that demands high financial investment. In this experiment, the operational yield and the operational and production costs of the Hypro forest processor were studied. The technical analysis comprised time-and-motion studies using the continuous-time method. Operational yield was determined from the volume, in cubic metres, of processed wood. The economic analysis incorporated the parameters of operational cost, wood processing cost and energy yield. The data showed an operational yield of 38 trees per effective working hour, or 11.68 m³ of debarked wood per effective working hour, with a processing cost of US$ 6.85 per cubic metre of debarked wood.
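
As a consistency check implied by the reported figures (the abstract does not state the hourly machine cost explicitly), the processing cost per cubic metre is the hourly operational cost divided by the hourly yield:

```latex
\text{cost per m}^3 = \frac{\text{hourly operational cost}}{\text{hourly yield}}
\;\Rightarrow\;
\text{hourly operational cost} \approx 6.85~\tfrac{\text{US\$}}{\text{m}^3} \times 11.68~\tfrac{\text{m}^3}{\text{h}} \approx 80~\tfrac{\text{US\$}}{\text{h}}
```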

Relevance:

30.00%

Publisher:

Abstract:

Digital signal processing (DSP) aims to extract specific information from digital signals. Digital signals are, by definition, physical quantities represented by a sequence of discrete values, and from these sequences it is possible to extract and analyze the desired information. Unevenly sampled data cannot be properly analyzed using standard DSP techniques. This work aimed to adapt a DSP technique, multiresolution analysis, to the analysis of unevenly sampled data, in order to support the studies of the CoRoT laboratory at UFRN. The approach is based on re-indexing the wavelet transform so that it handles unevenly sampled data properly. The method was effective, presenting satisfactory results.
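
A generic sketch of one way to handle irregular sampling, evaluating a Morlet-like wavelet coefficient directly at the observed time stamps; this only illustrates the general idea and is not the re-indexing scheme proposed in the thesis:

```python
import numpy as np

# Generic illustration: wavelet coefficient evaluated at arbitrary sample times,
# with the uneven sampling intervals used as integration weights.

def wavelet_coefficient(t, x, scale, tau, omega0=6.0):
    """Morlet-like wavelet coefficient at a given scale and shift tau
    for samples x(t) taken at arbitrary times t."""
    u = (t - tau) / scale
    psi = np.exp(1j * omega0 * u) * np.exp(-0.5 * u**2)  # Morlet wavelet
    dt = np.gradient(t)                                  # local sampling steps
    return np.sum(x * np.conj(psi) * dt) / np.sqrt(scale)

# Example with an unevenly sampled sinusoid of frequency 0.2.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 50.0, 300))
x = np.sin(2 * np.pi * 0.2 * t)
print(abs(wavelet_coefficient(t, x, scale=1 / 0.2, tau=25.0)))
```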

Relevance:

30.00%

Publisher:

Abstract:

Recent years have seen increasing acceptance and adoption of parallel processing, both for high-performance scientific computing and for general-purpose applications. This acceptance has been favored mainly by the development of massively parallel processing (MPP) environments and of distributed computing. A point in common between distributed systems and MPP architectures is the notion of message passing, which allows communication between processes. A message-passing environment consists basically of a communication library that acts as an extension of the programming languages used to write parallel applications, such as C, C++ and Fortran. In the development of parallel applications, a fundamental aspect is the analysis of their performance. Several metrics can be used in this analysis: execution time, efficiency in the use of the processing elements, and scalability of the application with respect to the increase in the number of processors or in the size of the problem instance. Establishing models or mechanisms that allow this analysis can be a rather complicated task, considering the parameters and degrees of freedom involved in the implementation of a parallel application. An alternative that has been adopted is the use of tools for collecting and visualizing performance data, which allow the user to identify bottlenecks and sources of inefficiency in an application. Efficient visualization requires identifying and collecting data related to the execution of the application, a stage called instrumentation. This work presents, initially, a study of the main techniques used for collecting performance data, followed by a detailed analysis of the main available tools that can be used on Beowulf-type parallel clusters running Linux on the x86 platform, with communication libraries based on MPI (Message Passing Interface), such as LAM and MPICH. The analysis is validated on parallel applications that deal with the training of perceptron-type neural networks using backpropagation. The conclusions show the potential and the ease of use of the analyzed tools.
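
The tools surveyed instrument C/Fortran MPI codes; purely as an illustration of what manual instrumentation of a message exchange looks like (here using the mpi4py bindings rather than the C API of LAM/MPICH), one could time a simple ping-pong between two ranks:

```python
from mpi4py import MPI  # Python MPI bindings, used here only for illustration.

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Manual instrumentation: time a ping-pong exchange between ranks 0 and 1.
comm.Barrier()
t0 = MPI.Wtime()
if rank == 0:
    comm.send(b"x" * 1024, dest=1, tag=0)
    comm.recv(source=1, tag=1)
elif rank == 1:
    comm.recv(source=0, tag=0)
    comm.send(b"x" * 1024, dest=0, tag=1)
t1 = MPI.Wtime()

if rank == 0:
    print(f"round-trip time for 1 KiB: {t1 - t0:.6f} s")

# Run with at least two processes, e.g.: mpiexec -n 2 python pingpong.py
```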

Relevance:

30.00%

Publisher:

Abstract:

Self-organizing maps (SOM) are artificial neural networks widely used in the data mining field, mainly because they constitute a dimensionality reduction technique, given the fixed grid of neurons associated with the network. In order to properly partition and visualize the SOM network, the various methods available in the literature must be applied in a post-processing stage, which consists of inferring, through its neurons, relevant characteristics of the data set. In general, applying such processing to the network neurons, instead of to the entire database, reduces the computational cost due to vector quantization. This work proposes a post-processing of the SOM neurons in the input and output spaces, combining visualization techniques with algorithms based on gravitational forces and on the search for the shortest path with the greatest reward. Such methods take into account the connection strength between neighbouring neurons and characteristics of pattern density and of distances among neurons, both associated with the positions the neurons occupy in the data space after training the network. The goal is thus to define more clearly the arrangement of the clusters present in the data. Experiments were carried out to evaluate the proposed methods using several artificially generated data sets, as well as real-world data sets. The results obtained were compared with those of a number of well-known methods from the literature.
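
The gravitational and path-search algorithms are the thesis's own contribution and are not detailed in the abstract; as a minimal sketch of the kind of neighbour-distance information such post-processing typically starts from, a U-matrix over a rectangular SOM grid can be computed as follows (the array shape is an assumption for illustration):

```python
import numpy as np

# Sketch of a U-matrix: average distance from each SOM neuron to its grid
# neighbours, a common starting point for post-processing a trained map.
# `weights` is assumed to have shape (rows, cols, dim).

def u_matrix(weights):
    rows, cols, _ = weights.shape
    umat = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            dists = []
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < rows and 0 <= nj < cols:
                    dists.append(np.linalg.norm(weights[i, j] - weights[ni, nj]))
            umat[i, j] = np.mean(dists)
    return umat

# Example with random "trained" weights on a 10x10 grid in a 3-D input space.
rng = np.random.default_rng(1)
print(u_matrix(rng.normal(size=(10, 10, 3))).shape)
```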

Relevance:

30.00%

Publisher:

Abstract:

In the present work, accessions of Eichhornia crassipes (water hyacinth) were collected in the reservoirs of the Barra Bonita, Bariri, Três Irmãos, Ilha Solteira, Salto Grande, Promissão, Ibitinga, Nova Avanhandava, Mogi-Guaçu, Euclides da Cunha, Jaguari, Jurumirim, Jupiá, Paraibuna and Porto Primavera hydroelectric plants, in the State of São Paulo. These accessions were submitted to a genetic variability study by means of RAPD. The primers used were OP X02, OP X07, OP X11 and OP P10 (TTCCGCCACC, GAGCGAGGCT, GGAGCCTCAG and TCCCGCCTAG, respectively). Among the accessions collected and analyzed, 21 showed a genetic identity index above 0.90. The dendrogram generated from the between-population data revealed strong agreement with the geographic distribution of the reservoirs containing the water hyacinth plants. The genetic variability found among the accessions collected in the different reservoirs studied was high, considering that the main reproduction pathway of this species is vegetative.
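
The abstract does not state which identity coefficient was computed; as an illustration of how such indices are commonly obtained from binary RAPD band profiles, a Nei & Li (Dice) similarity can be computed as follows:

```python
import numpy as np

# Illustrative Nei & Li (Dice) similarity between two accessions scored as
# binary RAPD band profiles (1 = band present, 0 = absent). The abstract does
# not specify which identity index was actually used.

def nei_li_similarity(a, b):
    a, b = np.asarray(a, dtype=bool), np.asarray(b, dtype=bool)
    shared = np.sum(a & b)                 # bands present in both accessions
    return 2 * shared / (np.sum(a) + np.sum(b))

accession_1 = [1, 1, 0, 1, 0, 1, 1, 0]
accession_2 = [1, 1, 0, 1, 1, 1, 0, 0]
print(f"similarity = {nei_li_similarity(accession_1, accession_2):.2f}")
```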

Relevance:

30.00%

Publisher:

Abstract:

During the salt production process, the first salt crystals formed are disposed of as industrial waste. This waste consists basically of gypsum, composed of calcium sulfate dihydrate (CaSO4·2H2O), known as "carago cru" or "malacacheta". After being submitted to a calcination process to produce plaster (CaSO4·0.5H2O), it can be made suitable for application in the cement industry. This work aims to optimize the time and temperature of the calcination process of the gypsum (carago) in order to obtain beta plaster in accordance with the specifications of the civil construction standards. The experiments involved the chemical and mineralogical characterization of the gypsum (carago) from the crystallizers and of the plaster produced, from a salt industry located in Mossoró, using the following techniques: X-ray diffraction (XRD), X-ray fluorescence (XRF), thermogravimetric analysis (TG/DTG) and scanning electron microscopy (SEM) with EDS. For the optimization of the time and temperature of the calcination process, a three-level factorial design was used, with response surfaces for the compressive strength tests and setting time, according to standard NBR-13207 (Plasters for civil construction), together with X-ray diffraction of the beta plasters (carago) obtained from calcination. The STATISTICA 7.0 software was used to relate the experimental data to a statistical model. The calcination of the gypsum (carago) was studied in the temperature range from 120 °C to 160 °C and in the time range from 90 to 210 minutes, in an oven at atmospheric pressure. It was found that, with the temperature increased to 160 °C and the calcination time to 210 minutes, the compressive strength tests gave values above 10 MPa, meeting the required standard (> 8.40 MPa), and the X-ray diffractograms showed the predominance of the beta hemihydrate phase, yielding a good-quality beta plaster that complies with the standards in force and giving this by-product of the salt industry employability in civil construction.
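
As a sketch of the kind of statistical model fitted in this type of factorial optimization (a quadratic response surface in temperature and time), with placeholder response values rather than the thesis's measured strengths:

```python
import numpy as np

# Quadratic response surface for two factors (temperature, time), as used with
# three-level factorial designs. The strength values below are placeholders,
# not the thesis's measurements.

temperature = np.array([120, 120, 120, 140, 140, 140, 160, 160, 160], dtype=float)
time_min    = np.array([ 90, 150, 210,  90, 150, 210,  90, 150, 210], dtype=float)
strength    = np.array([6.0, 6.8, 7.5, 7.2, 8.1, 8.9, 8.5, 9.4, 10.2])  # MPa (illustrative)

# Design matrix for y = b0 + b1*T + b2*t + b3*T^2 + b4*t^2 + b5*T*t
X = np.column_stack([np.ones_like(temperature), temperature, time_min,
                     temperature**2, time_min**2, temperature * time_min])
coeffs, *_ = np.linalg.lstsq(X, strength, rcond=None)
print("fitted coefficients:", np.round(coeffs, 6))
```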

Relevance:

30.00%

Publisher:

Abstract:

In the genetic improvement of forest species, a base population or pre-selected superior individuals are of fundamental importance for maintaining the breeding program. Individuals from the best provenances and with a broad genetic base allow gains to be obtained continuously. The objective of this work was to evaluate the genetic diversity in two nucleus populations of Eucalyptus grandis. Thirty-nine individuals were evaluated, 19 belonging to population 1 and 20 to population 2, using 14 microsatellite primers. The fragments were identified and analyzed with the GeneScan and Genotyper programs, using an ABI Prism 3100 automatic sequencer. The number of alleles found per primer ranged from 5 to 15 in population 1 and from 8 to 18 in population 2. The estimated heterozygosity was higher in population 2 (0.869) than in population 1 (0.843). The mean genetic distance between individuals was 0.6220 in population 1 and 0.6112 in population 2. With the molecular characterization of the individuals of these populations, a database was built that will make it possible, based on population genetics parameters, to monitor these breeding programs over different selection cycles.
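
The abstract does not state which heterozygosity estimator was used; the usual expected heterozygosity (gene diversity) for a locus with k alleles of frequencies p_i, averaged over L loci, is:

```latex
% Standard expected heterozygosity per locus and its average over loci;
% the abstract does not state which estimator was actually applied.
H_e = 1 - \sum_{i=1}^{k} p_i^2, \qquad
\bar{H}_e = \frac{1}{L}\sum_{\ell=1}^{L} H_{e,\ell}
```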

Relevance:

30.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior

Relevance:

30.00%

Publisher:

Abstract:

As professors in the Curso de Licenciatura em Letras at the Campus Avançado Profa Maria Eliza Albuquerque Maia (CAMEAM) of the Universidade do Estado do Rio Grande do Norte (UERN), in the town of Pau dos Ferros, in the state of Rio Grande do Norte, we had the chance to carry out several writing activities, as well as to guide rewriting activities for the texts produced. From this experience, we began to perceive the need to reflect upon the writing process in higher education. Thus, in this research we aim to analyze the methodology used when writing practice activities are carried out in higher education, investigating, in particular, the rewriting practices, with respect to the operations used to carry out such activities and to the effects of meaning produced by the alterations made in the texts. Our theoretical foundation is grounded on a conception of text as verbal action, which reflects a socio-interactional view of language (MARCUSCHI, 2008; SAUTCHUK, 2003). As for the production of written texts, our research focus, we assume that in this activity we deal with distinct figures (active writer and internal reader), so that we can, beyond writing, reflect upon our writing and thus decide on the operations to be carried out to make the alterations necessary for the rewriting of our texts (SAUTCHUK, 2003). Still regarding the theoretical foundations of this research, we draw on Textual Analysis of Discourse (TAD), which discusses the belief in the self-evidence of texts and opposes the fixist view of textuality according to which texts exist by themselves (ADAM, 2008; [2005]2010). From this perspective, we have also adopted concepts from genetic criticism, which is concerned with the relation between text and genesis, taking as its objects documents that bear traces of the text in progress, on the understanding that the text is the result of work in progress and that writing, in turn, is an activity in continuous movement (HAY, [1975]2002; DE BIASI, [2000]2010; GRÉSILLON, 1989; [1990]2008; [1992]2002; SALLES, 2008a). The methodology of this research is ethnography-based, an approach that focuses on process and on meaning. To meet the objectives proposed in our research, we used different data collection procedures typical of an ethnographic study, such as observation, note-taking and document analysis. The data analyzed were collected during the 2008.2 semester, in a first-term class of the Curso de Letras at CAMEAM, when we gathered twenty-one written texts, all of which were rewritten in rewriting activities, providing a corpus of forty-two texts that were analyzed on the basis of the linguistic operations identified by Generative Grammar and adopted by Lebrave and Grésillon (2009). From these analyses, we were able to confirm that writing is a process and that rewriting is an extremely important activity within this process. The data also showed that substitution was the operation most used by the text authors. We believe this result is explained by the fact that substitution, according to Genetic Criticism, constitutes the source of all erasure, through which a change in the writing can most easily be made.
Regarding the operations of addition and deletion, we found that they were used, in quantitative terms, almost equivalently, which can be explained by noting that both operations require from the author of the text strategies different from those used for substitution, namely, respectively, adding or removing a segment. Finally, we found that the displacement operation was the least used, since it works with a segment that is not replaced, added or deleted, but transferred to another place in the text, which requires a greater ability from the author to perform the operation without compromising the meaning of the writing. As a result, we hope to contribute to reflection on the teaching of writing, considering, in particular, Letras degree courses. Our analysis is intended to contribute to the teaching of the Portuguese language, specifically to activities that guide text production so as to explore with students the ability to rewrite their own texts.