921 results for Database evaluation
Abstract:
Prediction equations for intake of elephant grass (Pennisetum purpureum, Schumack) pasture by lactating crossbred Holstein x Zebu cows were developed using stepwise procedures in multiple regressions, applied to a database of experiments conducted over three years at Embrapa Gado de Leite, the dairy cattle center of the Brazilian Agricultural Research Corporation (Coronel Pacheco, MG). The available independent variables were related to characteristics of the cows (days in lactation; milk protein, fat, and total dry extract contents and the yields of these components; milk yield, raw or corrected to 4% fat; lactation order; current live weight; live weight at calving; and Holstein x Zebu blood grade); to management (grazing days, forage availability, and pasture rest period); to the environment (season of the year and rainfall); and to feeding (in vitro digestibility and chemical composition parameters of the elephant grass pasture and of sugarcane, Saccharum officinarum (L.), treated with 1% urea; intakes of the forage supplement (urea-treated sugarcane) and of concentrate; and fecal concentrations of crude protein and of neutral and acid detergent fiber). Linear and quadratic effects and logarithmic transformations were additionally included in the database. Prediction equations for elephant grass pasture intake (expressed in kg/cow/day or % of live weight) were obtained with coefficients of determination from 65.2 to 67.0%. The main independent variables included in the equations were the intake of the forage supplement used in the dry season (urea-treated sugarcane); the in vitro digestibility of the elephant grass pasture; rainfall; milk yield corrected to 4% fat; current live weight or, alternatively, the weight recorded after calving; and the intake of concentrate supplement, which showed a substitution effect on elephant grass pasture intake.
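As an illustration of the stepwise procedure this abstract describes, the sketch below runs a forward selection over synthetic stand-ins for a few of the listed predictors; the column names are hypothetical, not the EMBRAPA variables.

```python
# Forward stepwise selection for a multiple-regression intake model.
# Synthetic data and hypothetical column names, for illustration only.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def forward_stepwise(df, response, candidates, threshold_in=0.05):
    """Repeatedly add the candidate with the lowest p-value below threshold_in."""
    selected = []
    remaining = list(candidates)
    while remaining:
        pvals = {}
        for var in remaining:
            X = sm.add_constant(df[selected + [var]])
            model = sm.OLS(df[response], X).fit()
            pvals[var] = model.pvalues[var]
        best = min(pvals, key=pvals.get)
        if pvals[best] >= threshold_in:
            break
        selected.append(best)
        remaining.remove(best)
    return selected

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "sugarcane_intake": rng.normal(5, 1, 200),   # forage supplement, kg DM
    "ivdmd": rng.normal(60, 5, 200),             # in vitro digestibility, %
    "rainfall": rng.normal(100, 30, 200),        # mm
    "milk_fcm": rng.normal(12, 2, 200),          # 4%-fat-corrected milk, kg
})
df["pasture_intake"] = (10 - 0.8 * df["sugarcane_intake"]
                        + 0.05 * df["ivdmd"] + rng.normal(0, 1, 200))
print(forward_stepwise(df, "pasture_intake",
                       ["sugarcane_intake", "ivdmd", "rainfall", "milk_fcm"]))
```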
Abstract:
Principal component (PC) factor analysis was used to examine the relationships among variables in a database of Embrapa Gado de Leite, the dairy cattle center of the Brazilian Agricultural Research Corporation. The available variables were related to the cows (days in lactation, milk fat and total dry extract contents, milk yield, lactation order, live weight, and blood grade), to management (grazing day, pasture availability, and rest periods), to the environment (season of the year, rainfall), and to feed (nutrient intake from concentrate and from urea-treated sugarcane, dry matter intake from elephant grass pasture, chemical composition and in vitro digestibility of the pasture, and fecal concentrations of CP, NDF, and ADF). The first PC (33.7% of the data inertia) represented the use of forage supplementation of the pasture (urea-treated sugarcane) in response to the seasonal reduction in elephant grass availability and intake. The second PC (15.3% of the inertia) was related to nutrient intake from concentrate. The third PC (8.5% of the inertia) represented management effects on pasture chemical composition. Graphical interpretation of the results favored a more dynamic perception of the intensity of association and antagonism among the variables considered in the study.
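A minimal sketch of the principal component step, assuming standardized synthetic variables in place of the herd, management, and feed measurements; the explained variance ratios correspond to the shares of inertia quoted for each PC.

```python
# PCA on standardized variables (synthetic stand-ins for the real data).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.normal(size=(150, 8))   # 150 records, 8 hypothetical variables

X_std = StandardScaler().fit_transform(X)   # PCA on the correlation matrix
pca = PCA(n_components=3).fit(X_std)

# Share of total variance ("inertia") captured by each component
print(pca.explained_variance_ratio_)
# Loadings: weight of each variable on each PC, used for the biplots
print(pca.components_)
```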
Abstract:
The present work is grounded on the use of the basic tools of Statistical Process Control (SPC), with the intent to detect non-conformities in a given production process. It consists of a case study carried out at a blood center (Hemocenter) in Natal (Rio Grande do Norte). The study shows that Statistical Process Control, used here as a tool, is useful for identifying non-conformities in the volume of blood components. The data were gathered by means of document analysis, direct observation, and database queries. The results show that the analyzed products, even though in some cases they presented points out of control, satisfied the ANVISA standards. Finally, suggestions for further improvement of the final product and guidance for future use of SPC, extended to other production lines as well, are presented.
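The control-chart logic the study applies can be sketched as follows: Shewhart X-bar limits at three estimated standard errors around the grand mean, computed on synthetic bag volumes (the real acceptance thresholds are ANVISA's, not these limits).

```python
# Shewhart X-bar chart limits for blood-component volumes (synthetic data).
import numpy as np

rng = np.random.default_rng(2)
samples = rng.normal(loc=280.0, scale=8.0, size=(25, 5))  # 25 subgroups of 5 bags (mL)

xbar = samples.mean(axis=1)                   # subgroup means
sbar = samples.std(axis=1, ddof=1).mean()     # mean subgroup standard deviation
c4 = 0.9400                                   # bias-correction constant for n = 5
center = xbar.mean()
sigma_hat = sbar / c4
ucl = center + 3 * sigma_hat / np.sqrt(5)
lcl = center - 3 * sigma_hat / np.sqrt(5)

out_of_control = np.where((xbar > ucl) | (xbar < lcl))[0]
print(f"CL={center:.1f}  UCL={ucl:.1f}  LCL={lcl:.1f}  flagged subgroups: {out_of_control}")
```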
Abstract:
In order to guarantee database consistency, a database system should synchronize the operations of concurrent transactions. The database component responsible for this synchronization is the scheduler. A scheduler synchronizes operations belonging to different transactions by means of concurrency control protocols. Concurrency control protocols may present different behaviors: in general, a scheduler's behavior can be classified as aggressive or conservative. This paper presents the Intelligent Transaction Scheduler (ITS), which can synchronize the execution of concurrent transactions in an adaptive manner. This scheduler adapts its behavior (aggressive or conservative) to the characteristics of the computing environment in which it is inserted, using an expert system based on fuzzy logic. The ITS can implement different correctness criteria, such as conventional (syntactic) serializability and semantic serializability. To evaluate the performance of the ITS against schedulers with exclusively aggressive or conservative behavior, it was applied in a dynamic environment, namely a Mobile Database Community (MDBC). An MDBC simulator was developed and many sets of tests were run. The experimental results presented herein demonstrate the efficiency of the ITS in synchronizing transactions in a dynamic environment.
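A toy version of the fuzzy decision such a scheduler makes could look like the following; the single input, the membership functions, and the rules are invented for illustration and are not the ITS rule base from the paper.

```python
# Toy fuzzy inference for picking scheduler behavior, assuming one input
# (the observed conflict rate) and triangular membership functions.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def choose_behavior(conflict_rate):
    low = tri(conflict_rate, -0.01, 0.0, 0.4)   # "few conflicts" membership
    high = tri(conflict_rate, 0.3, 1.0, 1.01)   # "many conflicts" membership
    # Rules: few conflicts -> aggressive (optimistic); many -> conservative (locking)
    return "aggressive" if low >= high else "conservative"

for rate in (0.05, 0.25, 0.6, 0.9):
    print(rate, "->", choose_behavior(rate))
```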
Abstract:
Skin cancer is the most common of all cancers, and the increase in its incidence may be partly attributed to people's behavior regarding sun exposure. In Brazil, non-melanoma skin cancer is the most incident in the majority of regions. Dermatoscopy and videodermatoscopy are the main types of examination for the diagnosis of dermatological skin illnesses. The field involving the use of computational tools to help or follow medical diagnosis of dermatological injuries is very recent. Several methods have been proposed for automatic classification of skin pathologies using images. The present work aims to present a new intelligent methodology for the analysis and classification of skin cancer images, based on digital image processing techniques for extraction of color, shape, and texture characteristics, using the Wavelet Packet Transform (WPT) and a learning technique called the Support Vector Machine (SVM). The Wavelet Packet Transform is applied to extract texture characteristics from the images. The WPT consists of a set of basis functions that represents the image in different frequency bands, each one with distinct resolutions corresponding to each scale. Moreover, the color characteristics of the injury, which depend on a visual context influenced by the colors existing in its surroundings, are also computed, along with shape attributes obtained through Fourier descriptors. The Support Vector Machine, based on the structural risk minimization principle from statistical learning theory, is used for the classification task. The SVM aims to construct optimal hyperplanes that represent the separation between classes. The generated hyperplane is determined by a subset of the training points, called support vectors. For the database used in this work, the results showed good performance, reaching a global accuracy of 92.73% for melanoma, and 86% for non-melanoma and benign injuries. The extracted descriptors and the SVM classifier form a method capable of recognizing and classifying the analyzed skin injuries.
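The texture branch of such a pipeline (wavelet packet subband energies fed to an SVM) can be sketched as follows, with random arrays standing in for dermoscopy images.

```python
# Texture features from a 2D wavelet packet decomposition, fed to an SVM.
import numpy as np
import pywt
from sklearn.svm import SVC

def wpt_energy_features(img, wavelet="db2", level=2):
    """Mean energy of every wavelet-packet subband at the chosen level."""
    wp = pywt.WaveletPacket2D(data=img, wavelet=wavelet, maxlevel=level)
    nodes = wp.get_level(level)
    return np.array([np.mean(node.data ** 2) for node in nodes])

rng = np.random.default_rng(3)
X = np.array([wpt_energy_features(rng.normal(size=(64, 64))) for _ in range(40)])
y = rng.integers(0, 2, size=40)   # fake melanoma / non-melanoma labels

clf = SVC(kernel="rbf").fit(X[:30], y[:30])
print("held-out accuracy:", clf.score(X[30:], y[30:]))
```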
Abstract:
The objective of this study was to compare selection based on pre-weaning average daily weight gain (GMD) and on the number of days to gain 160 kg in that phase (D160), with and without correction for contemporary group (CG) effects, in Guzerá cattle. The weight development database of the Brazilian Zebu Breeders Association (ABCZ) for this breed was used. Genetic parameters and breeding values were obtained by the restricted maximum likelihood method, using a single-trait model with mixed model equations. The model comprised the fixed effect of genetic group and the random direct additive genetic and permanent environment effects, in addition to the residual error. The mean for D160 was 270.5 days and for GMD, 642.3 g. The Spearman correlations between average daily gain and precocity in days to gain 160 kg pre-weaning (PD160), and between GMD and PD160c (PD160 corrected for the contemporary group effect), were 0.91 and 0.94, respectively. Selection for PD160 favors bulls whose progeny show superior and less variable performance, and standardizing this criterion on the contemporary group improved its efficiency. The ranking of bulls varies according to the selection criterion used, GMD or PD160, especially at the extremes, where selection and culling of sires occur.
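The rank comparison reported above can be reproduced in outline with scipy's Spearman correlation on synthetic breeding values; the 0.91 and 0.94 figures come from the real ABCZ data, not from this sketch.

```python
# Spearman rank correlation between two selection criteria (synthetic EBVs).
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(4)
ebv_gmd = rng.normal(size=100)                           # breeding values for GMD
ebv_pd160 = 0.9 * ebv_gmd + 0.3 * rng.normal(size=100)   # correlated criterion

rho, pvalue = spearmanr(ebv_gmd, ebv_pd160)
print(f"Spearman rho = {rho:.2f} (p = {pvalue:.1e})")

# Re-ranking at the extremes: compare the top-10 sets under each criterion
top_gmd = set(np.argsort(ebv_gmd)[-10:])
top_pd160 = set(np.argsort(ebv_pd160)[-10:])
print("overlap among top-10 sires:", len(top_gmd & top_pd160))
```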
Abstract:
The human voice is an important communication tool, and any voice disorder can have profound implications for the social and professional life of an individual. Digital signal processing techniques have been used for the acoustic analysis of vocal disorders caused by pathologies in the larynx, due to their simplicity and noninvasive nature. This work deals with the acoustic analysis of voice signals affected by pathologies in the larynx, specifically edema and nodules on the vocal folds. The purpose of this work is to develop a voice classification system to help the pre-diagnosis of pathologies in the larynx, as well as the monitoring of pharmacological treatments and of post-surgical recovery. Linear Prediction Coefficients (LPC), Mel Frequency Cepstral Coefficients (MFCC), and coefficients obtained through the Wavelet Packet Transform (WPT) are applied to extract relevant characteristics of the voice signal. The Support Vector Machine (SVM) is used for the classification task; it aims to build optimal hyperplanes that maximize the margin of separation between the classes involved. The generated hyperplane is determined by the support vectors, which are subsets of points in these classes. On the database used in this work, the results showed good performance, with a hit rate of 98.46% for the classification of normal versus pathological voices in general, and 98.75% for the classification between the two pathologies, edema and nodules.
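A compact sketch of the MFCC-plus-SVM branch, using synthetic signals in place of the voice database; real work would load recordings and labels instead.

```python
# MFCC features from a voice signal, classified with an SVM.
import numpy as np
import librosa
from sklearn.svm import SVC

def mfcc_features(y, sr=16000, n_mfcc=13):
    """Average each MFCC coefficient over time to get one vector per signal."""
    m = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return m.mean(axis=1)

rng = np.random.default_rng(5)
signals = [rng.normal(size=16000) for _ in range(30)]   # 1 s of "voice" each
X = np.array([mfcc_features(s) for s in signals])
y = rng.integers(0, 2, size=30)   # fake normal / pathological labels

clf = SVC(kernel="rbf").fit(X[:20], y[:20])
print("held-out accuracy:", clf.score(X[20:], y[20:]))
```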
Abstract:
The need to implement a software architecture that supports the development of a SCADA supervisory system for monitoring simulated industrial processes, with the flexibility of adding intelligent modules and devices such as PLCs according to the specifications of the problem, was the motivation for this work. In the present study, we developed an intelligent supervisory system on top of a simulation of a distillation column modeled with Unisim. Furthermore, OLE Automation was used for communication between the supervisory and simulation software, which, together with the use of a database, resulted in an architecture that is both scalable and easy to maintain. Moreover, intelligent modules were developed for preprocessing, feature extraction, and variable inference. These modules were fundamentally based on the Encog software.
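On Windows, an OLE Automation link of this kind could be sketched with pywin32 as below; the ProgID and object paths are placeholders, since the actual Unisim object model is vendor-defined and documented separately.

```python
# Connecting a Python supervisory layer to a simulator via OLE Automation.
# The ProgID and attribute names below are hypothetical placeholders.
import win32com.client  # pywin32, Windows only

sim = win32com.client.Dispatch("UnisimDesign.Application")  # hypothetical ProgID
case = sim.ActiveDocument                                   # hypothetical attribute

# A supervisory loop would then read process variables and log them:
# value = case.Flowsheet.Operations("Column").Temperature   # hypothetical path
# cursor.execute("INSERT INTO readings(tag, value) VALUES (?, ?)", ("T1", value))
```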
Abstract:
A total of 19,770 body weight records of Guzerá cattle, from birth to 365 days of age, from the database of the Brazilian Zebu Breeders Association (ABCZ), was analyzed with the objectives of comparing different residual variance structures, considering 1, 18, 28, and 53 residual classes and variance functions of quadratic to quintic order, and of estimating covariance functions of different orders for the direct additive genetic, maternal genetic, and animal and maternal permanent environment effects, as well as genetic parameters for body weights, using random regression models. The random effects were modeled by Legendre polynomial regressions with orders ranging from linear to quartic. The models were compared by the likelihood ratio test and by the Akaike Information and Schwarz Bayesian Information criteria. The model with 18 heterogeneous classes best fitted the residual variances according to the statistical tests, but the model with a fifth-order variance function also proved appropriate. The estimated direct heritabilities were higher than those found in the literature, ranging from 0.04 to 0.53, but followed the same trend as those estimated by single-trait analyses. Selection for weight at any age would improve weight at all ages in the interval studied.
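The Legendre covariates underlying such random regression models can be built as follows: ages are mapped onto [-1, 1] and normalized Legendre polynomials are evaluated at each record's age.

```python
# Legendre polynomial covariates for a random regression on age,
# as used to model weights from birth to 365 days.
import numpy as np
from numpy.polynomial import legendre

def legendre_covariates(age_days, order, a=0.0, b=365.0):
    """Evaluate normalized Legendre polynomials P_0..P_order at standardized age."""
    x = 2.0 * (age_days - a) / (b - a) - 1.0   # map [a, b] onto [-1, 1]
    cols = []
    for k in range(order + 1):
        coeffs = np.zeros(k + 1)
        coeffs[k] = 1.0                        # select P_k
        norm = np.sqrt((2 * k + 1) / 2.0)      # standard normalization
        cols.append(norm * legendre.legval(x, coeffs))
    return np.column_stack(cols)

ages = np.array([0, 90, 180, 270, 365], dtype=float)
Phi = legendre_covariates(ages, order=2)       # quadratic random regression
print(Phi)   # one row per record, one column per polynomial coefficient
```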
Abstract:
A 2.5D ray-tracing propagation model is proposed to predict radio loss in indoor environments. Specifically, we opted for the Shooting and Bouncing Rays (SBR) method, together with the Geometrical Theory of Diffraction (GTD). Besides line-of-sight (LOS) propagation, we consider that the radio waves may experience reflection, refraction, and diffraction (NLOS). In the SBR method, the transmitter antenna launches a bundle of rays that may or may not reach the receiver. Considering the transmitting antenna as a point, the rays are launched from this position and can reach the receiver either directly or after reflections, refractions, diffractions, or any combination of these effects. To model the environment, a database is built to record geometrical characteristics and information on the constituent materials of the scenario. The database works independently of the simulation program, providing robustness and flexibility to model other scenarios. Each propagation mechanism is treated separately. In line-of-sight propagation, the main contribution to the received signal comes from the direct ray, while reflected, refracted, and diffracted signals dominate when the line of sight is blocked. In that case, the transmitted signal reaches the receiver through more than one path, resulting in multipath fading. The transmission channel of a mobile system is simulated by moving either the transmitter or the receiver around the environment. The validity of the method is verified through simulations and measurements. The computed path losses are compared with values measured at 1.8 GHz. The results were obtained for the main corridor and the classrooms adjacent to it, and a reasonable agreement is observed. The numerical predictions are also compared with published data at 900 MHz and 2.44 GHz, showing good convergence.
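A minimal numeric stand-in for one step of such a ray budget: free-space loss for the direct ray plus one reflected ray summed coherently. The geometry and reflection coefficient below are hypothetical, not taken from the measured scenario.

```python
# Free-space (LOS) path loss plus a single reflected ray, the simplest
# illustration of coherent multipath summation in a ray-based model.
import numpy as np

C = 3e8        # speed of light, m/s
F = 1.8e9      # carrier frequency, Hz
LAM = C / F    # wavelength, m

def friis_loss_db(d):
    """Free-space path loss over distance d (meters)."""
    return 20 * np.log10(4 * np.pi * d / LAM)

def two_ray_field(d_direct, d_reflect, refl_coeff=-0.7):
    """Complex sum of direct and one reflected ray (unit transmit field)."""
    k = 2 * np.pi / LAM
    e_dir = np.exp(-1j * k * d_direct) / d_direct
    e_ref = refl_coeff * np.exp(-1j * k * d_reflect) / d_reflect
    return e_dir + e_ref

d, d_r = 20.0, 21.5   # direct and reflected path lengths, m (hypothetical)
rel_power_db = 20 * np.log10(abs(two_ray_field(d, d_r)) * d)   # vs. LOS alone
print(f"LOS loss: {friis_loss_db(d):.1f} dB; multipath gain: {rel_power_db:+.1f} dB")
```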
Abstract:
In the last two decades of the past century, following the consolidation of the Internet as the worldwide computer network, applications generating more robust data flows started to appear. The increasing use of videoconferencing stimulated the creation of a new form of point-to-multipoint transmission called IP Multicast. All companies working in software and hardware development for network videoconferencing have adjusted their products and developed new solutions for the use of multicast. However, configuring such different solutions is not easy, especially when changes in the operating system are also required. Besides, the existing free tools have limited functions, and the current commercial solutions are heavily dependent on specific platforms. With the maturity of IP Multicast technology and its inclusion in all current operating systems, object-oriented programming languages gained classes able to handle multicast traffic. So, with the help of the Java APIs for networking, databases, and hypertext, it became possible to develop an Integrated Environment able to handle multicast traffic, which is the major objective of this work. This document describes the implementation of the above-mentioned environment, which provides many functions for using and managing multicast traffic, functions that previously existed only in a limited way and in just a few tools, normally the commercial ones. This environment is useful to different kinds of users: common users who want to join multimedia Internet sessions, as well as more advanced users, such as engineers and network administrators, who may need to monitor and handle multicast traffic.
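The core multicast operation (joining a group and receiving datagrams) looks like this in Python's standard library; the environment described above does the equivalent with Java's MulticastSocket, and the group address here is a made-up session.

```python
# Joining an IP multicast group and receiving one datagram.
import socket
import struct

GROUP, PORT = "239.1.2.3", 5004   # hypothetical session address

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# IP_ADD_MEMBERSHIP takes the group address and the local interface
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

data, sender = sock.recvfrom(1500)   # blocks until a multicast packet arrives
print(len(data), "bytes from", sender)
```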
Abstract:
The sharing of knowledge and the integration of data are among the biggest challenges in health care and an essential contribution to improving its quality. Since the same person receives care in various health facilities throughout his or her life, that information is distributed across different information systems running on heterogeneous hardware and software platforms. This paper proposes a System of Health Information Based on Ontologies (SISOnt) for knowledge sharing and data integration in health, which allows new information to be inferred from the heterogeneous databases and the knowledge base. For this purpose, three ontologies were created, represented with the patterns and concepts proposed by the Semantic Web. The first ontology provides a representation of the disease concepts of the Secretariat of Health Surveillance (SVS), and the others represent the concepts of the databases of the Health Information Systems (SIS), specifically the Information System for Notifiable Diseases (SINAN) and the Mortality Information System (SIM).
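In the spirit of the Semantic Web stack the paper adopts, a few disease triples can be represented and queried with rdflib; the namespace and class names below are illustrative, not SISOnt's.

```python
# Representing and querying disease concepts as RDF triples.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

EX = Namespace("http://example.org/health#")   # hypothetical namespace
g = Graph()

g.add((EX.Dengue, RDF.type, EX.NotifiableDisease))
g.add((EX.Dengue, RDFS.label, Literal("Dengue")))
g.add((EX.NotifiableDisease, RDFS.subClassOf, EX.Disease))

# SPARQL query: which resources are notifiable diseases?
q = """SELECT ?d WHERE { ?d a <http://example.org/health#NotifiableDisease> }"""
for row in g.query(q):
    print(row.d)
```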
Abstract:
This study uses a computational model that considers the statistical characteristics of the wind and the reliability characteristics of a wind turbine, such as failure and repair rates, representing the wind farm by a Markov process to determine the estimated annual energy generated, and compares it with a real case. This model can also be used in reliability studies, and it provides performance indicators that help in analyzing the feasibility of setting up a wind farm, provided the power curve and wind speed measurements are available. To validate the model, simulations were carried out using the database of the PETROBRAS wind farm at Macau. The results were very close to the real ones, confirming that the model successfully reproduced the behavior of all components involved. Finally, the results of this model were compared with the annual energy estimated by modeling the wind distribution with a Weibull statistical distribution.
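The Weibull-based estimate mentioned at the end can be sketched as the expectation of a power curve under a Weibull wind distribution, derated by an availability figure such as a Markov model would supply; all parameters below are hypothetical.

```python
# Annual energy estimate from a Weibull wind model and a power curve.
import numpy as np
from scipy.stats import weibull_min
from scipy.integrate import trapezoid

k, c = 2.2, 8.5          # hypothetical Weibull shape and scale (m/s)
v = np.linspace(0.0, 30.0, 301)

def power_curve(v, rated_kw=1800.0, v_in=3.5, v_rated=12.0, v_out=25.0):
    """Piecewise turbine power curve (kW), cubic below rated speed."""
    p = np.where((v >= v_in) & (v < v_rated),
                 rated_kw * ((v - v_in) / (v_rated - v_in)) ** 3, 0.0)
    p = np.where((v >= v_rated) & (v <= v_out), rated_kw, p)
    return p

pdf = weibull_min.pdf(v, k, scale=c)
mean_power_kw = trapezoid(power_curve(v) * pdf, v)   # expected output, kW
availability = 0.95                                  # e.g., from a Markov model
print(f"Estimated annual energy: {mean_power_kw * availability * 8760 / 1000:.0f} MWh")
```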
Abstract:
This work aimed to evaluate the evolution of land use in the municipality of Botucatu, SP, over a three-year period, considering six types of vegetation cover (sugarcane, reforestation, native forest, pasture, citrus, and others), based on Landsat 5 satellite images, bands 3, 4, and 5, orbit 220, point 76, quadrant A, acquired on June 8, 1999. The Geographic Information System IDRISI for Windows 3.2 was used for the analyses. The results showed that this program was efficient in helping to identify and map land use areas, facilitating data processing. The TM/Landsat 5 satellite images provided an excellent database for supervised classification. The municipality has not been environmentally preserved, since less than 20% of its area is covered with native forests, the minimum required by law. Pasture areas, the main component of the municipality's landscape, confirm the region's vocation for livestock.
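The supervised classification step can be sketched with per-class Gaussian likelihoods over band values, a simplified stand-in for the classifier used in IDRISI; the training clusters below are synthetic.

```python
# Supervised classification of multispectral pixels (bands 3, 4, 5)
# into cover classes, using per-class Gaussian likelihoods.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(6)
classes = ["sugarcane", "native_forest", "pasture"]

# Synthetic training pixels: one Gaussian cluster of band values per class
X_train = np.vstack([rng.normal(loc=40 * (i + 1), scale=8, size=(50, 3))
                     for i in range(len(classes))])
y_train = np.repeat(np.arange(len(classes)), 50)

clf = GaussianNB().fit(X_train, y_train)

scene = rng.normal(loc=80, scale=30, size=(100, 3)).clip(0, 255)
labels = clf.predict(scene)   # class index for each pixel
print({c: int((labels == i).sum()) for i, c in enumerate(classes)})
```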
Abstract:
In this work, we propose a Geographical Information System that can be used as a tool for the treatment and study of problems related to environmental and city management issues. It is based on the Scalable Vector Graphics (SVG) standard for Web development of graphics. The project uses the concept of remote, real-time map creation through database access, with instructions executed by browsers on the Internet. As a way of proving the system's effectiveness, we present two case studies: the first on a region named Maracajaú Coral Reefs, located on the Rio Grande do Norte coast, and the second in northeastern Switzerland, in which we intended to promote the substitution of MapServer by the system proposed here. We also show results that demonstrate the larger geographical data handling capability achieved by the use of standardized codes and open source tools, such as the Extensible Markup Language (XML), the Document Object Model (DOM), the ECMAScript/JavaScript scripting languages, the Hypertext Preprocessor (PHP), and PostgreSQL with its extension PostGIS.
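On the database side of such a pipeline, PostGIS can emit SVG path data directly via its ST_AsSVG function; a minimal sketch, with hypothetical connection settings and table names.

```python
# Generating SVG path data for map features straight from PostGIS.
import psycopg2

conn = psycopg2.connect("dbname=gis user=gis")   # hypothetical credentials
cur = conn.cursor()

# ST_AsSVG returns a geometry as SVG path data (PostGIS built-in)
cur.execute("SELECT name, ST_AsSVG(geom) FROM reefs;")   # hypothetical table
paths = [f'<path id="{name}" d="{d}" fill="none" stroke="black"/>'
         for name, d in cur.fetchall()]

svg = '<svg xmlns="http://www.w3.org/2000/svg">' + "".join(paths) + "</svg>"
print(svg)
```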