984 results for Anos-padrão (standard years)


Relevance:

20.00%

Publisher:

Abstract:

The objectives of this study were to evaluate different models, with respect to the maternal effects included, for growth traits and to estimate genetic parameters for these traits in Canchim cattle, using single-, two-, and multi-trait analyses. The traits birth weight, weaning weight, weights standardized to 12, 18, 24, and 30 months of age in males and females, and mature weight of females were analyzed with four models in which the random effects were added sequentially. Maternal effects influenced weights from birth to 2 years of age, and weaning weight was the trait most affected by them. Direct heritability estimates from the two- and multi-trait analyses were higher than those from the single-trait analyses. Heritability estimates for the direct genetic effect obtained from the multi-trait analysis were 0.39 for birth weight, 0.31 for weaning weight, 0.29 for weight at 12 months, 0.28 for weight at 18 months, 0.26 for weight at 24 months, 0.30 for weight at 30 months, and 0.38 for mature weight. Genetic correlations estimated between weights at young ages and mature weight were high, above 0.79. Selection based on growth traits at any age can promote moderate genetic gains in the body weight of Canchim animals at all standard ages, including birth weight and mature weight of females. It is important to include in the analyses the weights recorded before selection when estimating genetic parameters for weights after selection. The multi-trait analysis is the most suitable.
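For illustration only (the term structure below is a standard assumption, not quoted from the study), the most complete of the four models, with direct additive, maternal additive, and maternal permanent environmental effects, and the direct heritability it defines can be sketched as:

```latex
% Assumed general form: fixed effects beta, direct additive a, maternal
% additive m, maternal permanent environmental c, and residual e.
\begin{equation}
  \mathbf{y} = \mathbf{X}\boldsymbol{\beta} + \mathbf{Z}_{1}\mathbf{a}
             + \mathbf{Z}_{2}\mathbf{m} + \mathbf{Z}_{3}\mathbf{c} + \mathbf{e}
\end{equation}
% Direct heritability, e.g. the reported 0.39 for birth weight:
\begin{equation}
  h^{2}_{a} = \frac{\sigma^{2}_{a}}
  {\sigma^{2}_{a} + \sigma^{2}_{m} + \sigma_{am} + \sigma^{2}_{c} + \sigma^{2}_{e}}
\end{equation}
```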

Relevance:

20.00%

Publisher:

Abstract:

Records of 1,698 animals from a Caracu herd selected for post-weaning weight between 1979 and 2002 were analyzed to verify the existence of additive genetic variability in growth traits and their interrelationships. The traits analyzed were: birth weight (PN), weight standardized to 120 days (P120), weaning weight standardized to 210 days (P210), weight of males at the end of the weight-gain performance test (P378), daily gain from birth to weaning (GND), daily gain of males during the performance test (G112), daily gain from weaning to yearling age in males (GDSm), weight of females standardized to 550 days (P550), daily gain of females on pasture from weaning to yearling age (GDSf), as well as hip height at one year of age in males (ALTm) and at yearling age in females (ALTf). (Co)variance components, heritabilities, and genetic correlations were estimated by derivative-free restricted maximum likelihood using the MTDFREML software. Heritability estimates and standard errors were 0.34±0.06, 0.11±0.05, 0.13±0.05, 0.11±0.05, 0.35±0.09, 0.42±0.09, 0.31±0.09, 0.13±0.06, 0.21±0.08, 0.55±0.11, and 0.51±0.09 for PN, P120, P210, GND, P378, P550, GDSm, GDSf, G112, ALTm, and ALTf, respectively. Genetic correlations among the traits were moderate to high and positive, except for some correlations involving GDSf and PN. Selection based on the individual's own performance, as has been practiced, provides genetic progress in the traits under direct selection, as well as in some traits with high genetic correlations.

Relevance:

20.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance:

20.00%

Publisher:

Abstract:

A total of 7,239 lactation curves of Caracu cows, recorded weekly between 1978 and 1988 at Fazenda Chiqueirão, Poços de Caldas, MG, Brazil, were fitted. The functions used were the hyperbolic linear (FLH), the logarithmic quadratic (FQL), the incomplete gamma (FGI), and the inverse polynomial (FPI). Parameters were estimated by nonlinear regression using iterative procedures. Goodness of fit was assessed by the adjusted coefficient of determination (R²A), the Durbin-Watson (DW) test, and the means and standard deviations estimated for the parameters and functions of the parameters of the models. For the average curve, R²A exceeded 0.90 for all functions. Good fits, defined as R²A > 0.80, were obtained for 25.2%, 39.1%, 31.1%, and 28.4% of the lactations fitted by FLH, FQL, FGI, and FPI, respectively. According to the DW test, good fits were obtained for 29.4% of the lactations fitted by FLH, 54.9% by FQL, 34.9% by FGI, and 29.6% by FPI. By both criteria, FQL was superior to the other functions, indicating wide variation in the shapes of the lactation curves generated by the individual fits. Atypical curves were estimated by the functions, with peaks occurring before calving and sometimes after the end of lactation. All functions showed problems when fitted to individual records.
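One of the four functions compared here, the incomplete gamma (FGI), is commonly written in Wood's form y(t) = a·t^b·e^(−ct). A minimal Python sketch of the iterative nonlinear fit described above, using invented test-day data rather than the study's records, could look like this:

```python
# Hedged sketch: fitting Wood's incomplete gamma lactation curve to weekly
# test-day yields. Data below are invented for illustration only.
import numpy as np
from scipy.optimize import curve_fit

def wood(t, a, b, c):
    """Incomplete gamma (Wood) lactation curve: a * t**b * exp(-c * t)."""
    return a * t**b * np.exp(-c * t)

# Hypothetical records: week of lactation vs. daily milk yield (kg).
rng = np.random.default_rng(0)
weeks = np.arange(1, 44, dtype=float)
yields = wood(weeks, 12.0, 0.20, 0.04) + rng.normal(0.0, 0.5, weeks.size)

# Iterative nonlinear least squares, mirroring the iterative fits in the study.
(a, b, c), _ = curve_fit(wood, weeks, yields, p0=[10.0, 0.1, 0.01])

# For the Wood curve, peak yield occurs at t = b / c.
print(f"a={a:.2f}, b={b:.3f}, c={c:.4f}, peak at week {b / c:.1f}")
```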

Relevance:

20.00%

Publisher:

Abstract:

This work presents a study of health care quality, focusing on appointment scheduling. Its main purpose is to define a statistical model and propose a quality grade for the appointment waiting time, measured from the day the patient books the appointment to the day the consultation takes place. Reliability techniques and functions are used, whose main characteristic is the analysis of time-to-event data. A random sample of 1,743 patients was drawn from the appointment system of a university hospital, the Hospital Universitário Onofre Lopes of the Federal University of Rio Grande do Norte, Brazil, stratified by clinical specialty. The data were analyzed with parametric methods of reliability statistics, and the regression model fitting indicated the Weibull distribution as the best fit to the data. The proposed quality grade is based on the PAHO criteria for appointment scheduling, and the results show that no clinic achieved the PAHO quality grade. The proposed grade could be used to set improvement priorities and as a quality-control criterion.
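A minimal sketch of the parametric step reported here, fitting a Weibull distribution to waiting times with SciPy; the sample values are invented for illustration and do not reproduce the hospital data:

```python
# Hedged sketch: Weibull fit to appointment waiting times (days from
# booking to consultation), the distribution the abstract reports as best.
import numpy as np
from scipy import stats

waits = np.array([5, 12, 30, 45, 7, 60, 21, 14, 90, 33], dtype=float)

# Two-parameter Weibull: location fixed at zero, shape and scale estimated.
shape, loc, scale = stats.weibull_min.fit(waits, floc=0)

# Survival function S(t): probability a patient waits longer than t days.
t = 30.0
print(f"shape={shape:.2f}, scale={scale:.1f} days, "
      f"P(wait > {t:.0f} d) = {stats.weibull_min.sf(t, shape, loc, scale):.2f}")
```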

Relevance:

20.00%

Publisher:

Abstract:

Knowledge management has received major attention from product designers because many activities within this process must be creative and therefore depend essentially on the knowledge of the people involved. Moreover, the Product Development Process (PDP) is one of the activities in which knowledge management manifests most critically, given its intensive application of knowledge. This thesis therefore analyzes knowledge management with the aim of improving the PDP and proposes a theoretical model of knowledge management. The model comprises five steps (creation, maintenance, dissemination, utilization, and discard) and verifies the occurrence of four types of knowledge conversion (socialization, externalization, combination, and internalization) to improve knowledge management in this process. In Small and Medium Enterprises (SMEs), intellectual capital managed efficiently and with the participation of all employees becomes the mechanism for knowledge creation and transfer, supporting and consequently improving the PDP. The expected result is an effective and efficient application of the proposed model for building a knowledge base within the organization (organizational memory), aiming at better PDP performance. To this end, an extensive analysis of knowledge management (a qualitative, subjective evaluation instrument) was carried out in the Design department of a Brazilian organization (SEBRAE/RN). This analysis aimed to establish the state of the art of the Design department regarding the use of knowledge management, an important step for evaluating the department's level of evolution in the practical use of knowledge management before implementing the proposed theoretical model and its methodology. Finally, based on the diagnostic results, a knowledge management system is suggested to facilitate knowledge sharing within the organization, in other words, the Design department.

Relevance:

20.00%

Publisher:

Abstract:

This experiment was conducted to evaluate the tryptophan requirements and the pattern of performance recovery of laying hens fed tryptophan-deficient diets. A total of 160 Hisex White commercial layers were distributed in a completely randomized design with five dietary tryptophan levels (0.13, 0.15, 0.17, 0.19, and 0.21%) and eight replicates of four birds. The hens underwent two weeks of adaptation (51 to 52 weeks of age), six weeks for evaluation of the tryptophan requirement (53 to 58 weeks), and four weeks for determination of the pattern of performance recovery (59 to 62 weeks). Egg production and egg mass were impaired when hens were fed diets containing 0.13% tryptophan; however, performance was recovered after one week of feeding a diet containing 0.21% of this amino acid. Internal egg quality was not influenced by the tryptophan levels studied (intake of 137.1 to 228.0 mg tryptophan/day). Tryptophan requirements were estimated at 161 to 188 mg/day, depending on the trait evaluated (egg production or egg mass) and the regression model applied (polynomial, exponential, or segmented broken-line).
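Of the three regression models cited, the segmented (broken-line) model makes the requirement explicit as the breakpoint of the fitted curve. A minimal sketch with invented intake and egg-mass values (not the experiment's data):

```python
# Hedged sketch: broken-line model often used for nutrient requirements.
# The breakpoint is read as the tryptophan requirement (mg/day).
import numpy as np
from scipy.optimize import curve_fit

def broken_line(x, plateau, slope, breakpoint):
    """Linear ascent up to the breakpoint, then a plateau."""
    return np.where(x < breakpoint,
                    plateau - slope * (breakpoint - x),
                    plateau)

intake = np.array([137, 155, 170, 190, 210, 228], dtype=float)  # mg Trp/day
egg_mass = np.array([48.0, 52.5, 55.8, 56.2, 56.0, 56.1])       # g/day

params, _ = curve_fit(broken_line, intake, egg_mass, p0=[56.0, 0.1, 175.0])
print(f"estimated requirement ~ {params[2]:.0f} mg Trp/day")
```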

Relevance:

20.00%

Publisher:

Abstract:

This thesis proposes the specification and performance analysis of a real-time communication mechanism for the IEEE 802.11/11e standards, called Group Sequential Communication (GSC). By adopting decentralized medium access control with a publish/subscribe communication scheme, the GSC outperforms the HCCA mechanism when handling small data packets. The main objective is to reduce the HCCA overhead of the Polling, ACK, and QoS Null frames exchanged between the Hybrid Coordinator and the polled stations. The GSC eliminates the polling scheme of the HCCA scheduling algorithm through a Virtual Token Passing procedure among the members of the real-time group, which are granted high-priority, sequential access to the communication medium. To improve the reliability of the proposed mechanism over a noisy channel, an error recovery scheme called the second chance algorithm is presented. This scheme is based on a block acknowledgment strategy that allows missing real-time messages to be retransmitted. The GSC mechanism thus sustains real-time traffic across many IEEE 802.11/11e devices, with optimized bandwidth usage and minimal delay variation for data packets in the wireless network. To validate the communication scheme, the GSC and HCCA mechanisms were implemented in network simulation software developed in C/C++ and their performance results were compared. The experiments show the efficiency of the GSC mechanism, especially in industrial communication scenarios.
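A minimal sketch of the Virtual Token Passing idea as described above (names and structure are illustrative assumptions, not the thesis implementation): each station in the real-time group transmits when the virtual token, rotated over a fixed agreed sequence, reaches it, with no polling frames exchanged:

```python
# Hedged sketch of virtual token passing among a real-time group.
from collections import deque

class VirtualTokenGroup:
    def __init__(self, stations):
        self.ring = deque(stations)   # static, agreed-on transmission sequence

    def next_holder(self):
        """Rotate the virtual token to the next station in the sequence."""
        self.ring.rotate(-1)
        return self.ring[0]

group = VirtualTokenGroup(["sta1", "sta2", "sta3"])
for _ in range(5):
    holder = group.next_holder()
    print(f"{holder} transmits its real-time message")
```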

Relevance:

20.00%

Publisher:

Abstract:

In academia, it is common to create didactic processors for practical courses in computer hardware, which can also serve as subjects in courses on software platforms, operating systems, and compilers. Often, these processors are described without a standard ISA, which requires creating compilers and other basic software to provide the hardware/software interface and hinders their integration with other processors and devices. Using reconfigurable devices described in an HDL allows any microarchitecture component to be created or modified, changing the functional units of the processor's datapath as well as the state machine that implements the control unit as new needs arise. In particular, RISP processors enable the modification of machine instructions, allowing instructions to be added or changed, and may even adapt to a new architecture. This work addresses educational soft-core processors described in VHDL, applying a proposed methodology to two processors of different complexity levels, and shows that it is possible to tailor processors to a standard ISA without increasing hardware complexity, i.e., without a significant increase in chip area, while performance in application execution remains unchanged or is enhanced. The implementations also show that, besides being possible to replace a processor's architecture without changing its organization, a RISP processor can switch between different instruction sets, which can be extended to toggling between different ISAs, allowing a single processor to become an adaptive hybrid architecture suitable for embedded systems and heterogeneous multiprocessor environments.

Relevance:

20.00%

Publisher:

Abstract:

In this work, Markov chains are the tool used to model and analyze the convergence of the genetic algorithm, both in its standard version and in the variants the algorithm allows. In addition, we compare the performance of the standard version with a fuzzy version, on the premise that the fuzzy version gives the genetic algorithm a stronger ability to find a global optimum, a property expected of global optimization algorithms. The genetic algorithm was chosen because, over the past thirty years, it has become one of the most important tools for solving optimization problems, owing to its effectiveness in finding good-quality solutions; a good-quality solution is acceptable given that no other algorithm may be able to obtain the optimal solution for many of these problems. The algorithm's behavior depends not only on how the problem is represented but also on how some of its operators are defined, ranging from the standard version, in which parameters are kept fixed, to versions with variable parameters. Therefore, achieving good performance with this algorithm requires an adequate criterion for choosing its parameters, especially the mutation rate and the crossover rate, or even the population size. In implementations where parameters are kept fixed throughout the execution, modeling the algorithm by a Markov chain yields a homogeneous chain; when parameters vary during execution, the modeling Markov chain becomes non-homogeneous. In an attempt to improve performance, some studies have tuned the parameters through strategies that capture intrinsic characteristics of the problem. These characteristics are extracted from the current state of execution in order to identify and preserve patterns associated with good-quality solutions while discarding low-quality ones. Feature-extraction strategies can use either precise techniques or fuzzy techniques, the latter implemented through a fuzzy controller. A Markov chain is used for the modeling and convergence analysis of the algorithm in both versions. To evaluate the performance of the non-homogeneous algorithm, tests compare the standard genetic algorithm with the fuzzy genetic algorithm, whose mutation rate is adjusted by a fuzzy controller, on optimization problems whose number of solutions grows exponentially with the number of variables.
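As a hedged sketch of the comparison described here, the following toy genetic algorithm varies its mutation rate during the run from a population statistic; a simple diversity rule stands in for the fuzzy controller, and varying the rate is precisely what makes the modeling Markov chain non-homogeneous:

```python
# Hedged sketch: GA with a run-time-adapted mutation rate (non-homogeneous
# chain in the Markov view). The diversity rule is a crude stand-in for
# the fuzzy controller described in the abstract.
import random

def fitness(bits):
    return sum(bits)  # toy objective: maximize the number of 1s

def adapt_mutation_rate(population):
    """Mutate more when diversity is low, less when it is high."""
    distinct = len({tuple(ind) for ind in population})
    diversity = distinct / len(population)
    return 0.01 + 0.10 * (1.0 - diversity)

def evolve(n_bits=20, pop_size=30, generations=60, crossover_rate=0.8):
    pop = [[random.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        rate = adapt_mutation_rate(pop)               # varies per generation
        new_pop = []
        while len(new_pop) < pop_size:
            p1 = max(random.sample(pop, 2), key=fitness)  # binary tournament
            p2 = max(random.sample(pop, 2), key=fitness)
            if random.random() < crossover_rate:          # one-point crossover
                cut = random.randrange(1, n_bits)
                child = p1[:cut] + p2[cut:]
            else:
                child = p1[:]
            child = [b ^ (random.random() < rate) for b in child]  # mutation
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

best = evolve()
print(f"best fitness: {fitness(best)} of 20")
```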

Relevance:

20.00%

Publisher:

Abstract:

The monitoring of patients in hospitals is usually done in a manual or semi-automated way, in which members of the healthcare team must constantly visit patients to ascertain their health condition. This procedure, however, compromises the quality of the monitoring, since the shortage of physical and human resources in hospitals tends to overwhelm the healthcare team, preventing it from visiting patients with adequate frequency. Many works in the literature therefore propose alternatives for improving this monitoring through wireless networks. In these works, the network is dedicated to the data traffic generated by medical sensors and cannot be allocated to transmit data from applications running on user stations in the hospital. In hospital automation environments, this is a drawback, considering that the data generated by such applications can be directly related to the patient monitoring being conducted. Thus, this thesis defines Wi-Bio, a communication protocol for establishing IEEE 802.11 networks for patient monitoring that enables harmonious coexistence between the traffic generated by medical sensors and by user stations. The formal specification and verification of Wi-Bio were done through the design and analysis of Petri net models, and its validation was performed through simulations with the Network Simulator 2 (NS2) tool. The NS2 simulations were designed to portray a real patient monitoring environment corresponding to a floor of the nursing wards of the University Hospital Onofre Lopes (HUOL), located in Natal, Rio Grande do Norte. Moreover, to assess the feasibility of Wi-Bio against the wireless network standards prevailing in the market, the test scenario was also simulated with the network elements using the HCCA access mechanism described in the IEEE 802.11e amendment. The results confirmed the validity of the designed Petri nets and showed that Wi-Bio, in addition to outperforming HCCA on most of the items analyzed, also promoted efficient integration between the data generated by medical sensors and user applications on the same wireless network.

Relevance:

20.00%

Publisher:

Abstract:

This paper presents the performance analysis of traffic retransmission algorithms proposed for the HCCA medium access mechanism of the IEEE 802.11e standard applied to industrial environments. The electromagnetic interference typical of this kind of environment, the susceptibility of the IEEE 802.11 wireless medium to such interference, and the lack of retransmission mechanisms make it impracticable to guarantee the quality of service for real-time traffic that the IEEE 802.11e standard promises and this environment requires. To solve this problem, this paper proposes a new approach involving the creation and evaluation of retransmission algorithms intended to ensure a level of robustness, reliability, and quality of service for wireless communication in such environments. Under this approach, if a transmission error occurs, the traffic scheduler is able to manage retransmissions to recover the lost data. The proposed approach is evaluated through simulations in which the retransmission algorithms are applied to different scenarios, abstractions of an industrial environment; the results are obtained with a network simulator developed in-house and compared with one another to assess which of the algorithms performs best in a predefined application.

Relevance:

20.00%

Publisher:

Abstract:

The advance of computer networks in recent decades is notorious, whether in transmission rates, in the number of interconnected devices, or in the existing applications. In parallel, this progress is also visible in various sectors of automation, such as the industrial, commercial, and residential sectors. In one of its branches we find hospital networks, which can make use of a range of services, from simple patient registration to surgery performed by a robot under the supervision of a physician. In the context of both worlds appear applications in Telemedicine and Telehealth, which work with the real-time transfer of high-resolution images, sound, video, and patient data. A problem then arises, since computer networks, originally developed for transferring less complex data, are now being used by services that involve high transfer rates and demand quality of service (QoS) guarantees from the network. This work therefore aims to analyze and compare the performance of a network subjected to this type of application in two situations: first without QoS policies, and second with such policies applied, using the Metropolitan Health Network of the Federal University of Rio Grande do Norte (UFRN) as the test scenario.

Relevance:

20.00%

Publisher:

Abstract:

This work presents a scalable and efficient parallel implementation of the standard Simplex algorithm on multicore architectures for solving large-scale linear programming problems. We present a general scheme explaining how each step of the standard Simplex algorithm was parallelized, indicating important points of the parallel implementation. Performance analyses were conducted by comparison against the sequential Simplex tableau and against IBM's CPLEX. The experiments were executed on a shared-memory machine with 24 cores. The scalability analysis was performed on problems of different dimensions and found evidence that our parallel standard Simplex algorithm has better parallel efficiency for problems with more variables than constraints. In comparison with CPLEX, the proposed parallel algorithm achieved up to 16 times better efficiency.
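A hedged sketch (not the thesis code) of the step that makes the standard Simplex attractive for multicore execution: once the pivot column and pivot row are chosen, the remaining row eliminations are mutually independent and can be distributed across cores. NumPy expresses that data parallelism compactly here:

```python
import numpy as np

def pivot_step(tableau):
    """One standard Simplex pivot on a maximization tableau
    (last row = objective coefficients, last column = right-hand side)."""
    obj = tableau[-1, :-1]
    col = int(np.argmin(obj))                 # entering column (most negative cost)
    if obj[col] >= 0:
        return False                          # optimal: no negative cost left
    col_vals = tableau[:-1, col]
    positive = col_vals > 0
    if not positive.any():
        raise ValueError("problem is unbounded")
    ratios = np.full(col_vals.shape, np.inf)
    ratios[positive] = tableau[:-1, -1][positive] / col_vals[positive]
    row = int(np.argmin(ratios))              # leaving row (minimum-ratio test)
    tableau[row] /= tableau[row, col]
    others = np.arange(tableau.shape[0]) != row
    # The parallelizable part: each remaining row is updated independently.
    tableau[others] -= np.outer(tableau[others, col], tableau[row])
    return True

# Toy problem: maximize 3x + 2y subject to x + y <= 4, x + 3y <= 6, x, y >= 0.
T = np.array([[ 1.0,  1.0, 1.0, 0.0, 4.0],
              [ 1.0,  3.0, 0.0, 1.0, 6.0],
              [-3.0, -2.0, 0.0, 0.0, 0.0]])
while pivot_step(T):
    pass
print("optimal objective value:", T[-1, -1])   # 12.0 at x = 4, y = 0
```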

Relevance:

20.00%

Publisher:

Abstract:

Multiphase flows in ducts can adopt several morphologies depending on the mass fluxes and fluid properties. Annular flow is one of the most frequently encountered flow patterns in industrial applications. In gas-liquid systems, it consists of a liquid film flowing adjacent to the wall and a gas core flowing in the center of the duct. This work presents a numerical study of this flow pattern in gas-liquid systems in vertical ducts. For this, a solution algorithm was developed and implemented in FORTRAN 90 to solve the governing transport equations numerically. The mass and momentum conservation equations are solved simultaneously from the wall to the center of the duct using the finite volume technique. Momentum conservation at the gas-liquid interface is enforced using an equivalent effective viscosity, which also allows both velocity fields to be obtained from a single system of equations. In this way, the velocity distributions across the gas core and the liquid film are obtained iteratively, together with the global pressure gradient and the liquid film thickness. Convergence criteria are based on the satisfaction of mass balance within the liquid film and the gas core. For system closure, two different approaches are presented for calculating the radial turbulent viscosity distribution within the liquid film and the gas core. The first combines a one-equation k-ε model with a low-Reynolds k-ε model. The second uses a low-Reynolds k-ε model to compute the eddy viscosity profile from the center of the duct all the way to the wall. Appropriate interfacial values for k and ε are proposed, based on concepts and ideas previously used with success in stratified gas-liquid flow. The proposed approaches are compared with an algebraic model from the literature, devised specifically for annular gas-liquid flow, using available experimental results. This also serves as a validation of the solution algorithm.
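As a hedged sketch of the single-system formulation described above (the notation is assumed and the thesis equations may differ in detail), the axial momentum equation solved from wall to centerline, and the interfacial shear continuity that the equivalent effective viscosity enforces, can be written as:

```latex
% Assumed notation: u(r) axial velocity, mu_eff(r) effective (molecular +
% turbulent) viscosity, dp/dz global pressure gradient, R_i interface radius.
\begin{equation}
  \frac{1}{r}\frac{d}{dr}\!\left( r\,\mu_{\mathrm{eff}}(r)\,\frac{du}{dr} \right)
  = \frac{dp}{dz} + \rho(r)\,g
\end{equation}
% Continuity of shear stress at the gas-liquid interface r = R_i:
\begin{equation}
  \left.\mu_{\mathrm{eff}}\,\frac{du}{dr}\right|_{r \to R_i^-}
  = \left.\mu_{\mathrm{eff}}\,\frac{du}{dr}\right|_{r \to R_i^+}
\end{equation}
```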