92 results for Validation studies
Abstract:
The research aimed to develop and validate an instrument to systematize nursing care for postpartum women in primary care. The document was built on Horta's Theory of Basic Human Needs, on the standardized International Nursing Minimum Data Set, and on the nomenclature of nursing diagnoses and interventions derived from the International Classification for Nursing Practice. This is a methodological study carried out in five stages: identification of empirical indicators related to the postpartum woman through an integrative literature review; evaluation of the empirical indicators and their relation to basic human needs by a focus group of five specialist nurses; structuring of the instrument through categorization of the indicators; validation of the instrument's form and content by specialists using the Delphi technique; and application and development of the nursing diagnosis statements and interventions. Data collection for the first stage took place from January to March 2013 in the Scopus, Cinahl, Pubmed and Cochrane databases and in the Journal of Midwifery and Women's Health. The second, third and fourth stages were carried out from May to October 2013. Twelve and seven specialists participated in the first and second evaluation rounds, respectively. The specialists were selected through the Lattes Platform according to the following inclusion criteria: being a nurse, a lecturer and a specialist in obstetric nursing. These professionals were contacted by e-mail and, upon agreeing to participate in the research, signed an Informed Consent Form. The research was approved by the Research Ethics Committee of the Universidade Federal do Rio Grande do Norte, under protocol no. 184.241 and Certificate of Presentation for Ethical Appraisal no. 11674112.3.0000.5537.
Data from the first stage were analyzed with descriptive statistics and the results presented in tables. In this stage, 97 empirical indicators were identified and, when related to the basic human needs, 46 of them fell under psychobiological needs, 51 under psychosocial needs and 1 under psychospiritual needs. In the second and third stages, the data underwent categorization and analysis by the Content Validity Index. The indicators achieved a 100% validation index. In the postpartum assessment section, items that were not validated were excluded from the instrument. The remaining items scored above 70%, and the instrument was therefore validated. The nursing consultation instrument consists of the postpartum woman's identification data, assessment data on her basic human needs and nursing care items. In the final version, 73 Nursing Diagnoses and 155 Nursing Interventions were selected from the categorization of the empirical indicators validated in the second and third stages of the study. With the conclusion of the study, nurses will have an instrument for systematizing care for postpartum women in primary care. The document will also serve as a tool for teaching and research in obstetric nursing
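The Content Validity Index used in the second and third stages is simply the proportion of experts who rate an item as relevant, compared against the study's 70% cutoff. A minimal sketch, with hypothetical item names and ratings (the actual items and scores are not given in the abstract):

```python
# Content Validity Index (CVI): fraction of experts rating an item as
# relevant (e.g., 3 or 4 on a 4-point scale). Items validated when CVI >= 70%.
# Item names and ratings below are hypothetical illustrations only.

def item_cvi(ratings, relevant=(3, 4)):
    """Fraction of expert ratings considered 'relevant'."""
    return sum(r in relevant for r in ratings) / len(ratings)

ratings_by_item = {
    "lochia_assessment": [4, 4, 3, 4, 3, 4, 4],      # 7 experts (2nd round)
    "breastfeeding_support": [4, 3, 4, 2, 4, 4, 3],
}

for item, ratings in ratings_by_item.items():
    cvi = item_cvi(ratings)
    status = "validated" if cvi >= 0.70 else "excluded"
    print(f"{item}: CVI = {cvi:.2f} ({status})")
```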
Abstract:
The importance of the airport sector in the development of a country points to the need for studies on airport management to support decision making. In Brazil, growth in passenger demand drives investments aimed at balancing airport capacity with air demand. The study therefore aims to develop a System Dynamics model able to assist Brazilian airport management in sizing an airport's subsystems (Passenger Terminal, Runway and Apron). The methodology consists of the steps of defining the problem, formulating the dynamic hypothesis, building the simulation model, and validation experiments. Finally, we examined the status of each subsystem in thirteen Brazilian airports under current, most likely and optimistic scenarios for air passenger demand
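The core of a System Dynamics model like the one described is a stock (accumulated passenger demand) driven by a flow (demand growth), compared against subsystem capacity. A minimal sketch with Euler integration; all parameter values (initial demand, growth rate, terminal capacity) are hypothetical, not taken from the thesis:

```python
# Minimal System Dynamics sketch: passenger demand is a stock fed by a
# growth-rate flow; utilization flags when a subsystem saturates.
# All numbers below are hypothetical scenario parameters.

def simulate(demand0, growth_rate, capacity, years, dt=1.0):
    demand, trajectory = demand0, []
    for _ in range(int(years / dt)):
        demand += growth_rate * demand * dt              # inflow of new demand
        trajectory.append((demand, demand / capacity))   # (demand, utilization)
    return trajectory

# Optimistic scenario: 5 M passengers/year, 7% growth, 8 M terminal capacity.
for year, (d, u) in enumerate(simulate(5e6, 0.07, 8e6, years=10), start=1):
    if u > 1.0:
        print(f"year {year}: terminal saturated (utilization {u:.2f})")
        break
```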
Abstract:
Most algorithms for state estimation based on the classical model are adequate only for use in transmission networks. Few algorithms were developed specifically for distribution systems, probably because of the small amount of data available in real time. Most overhead feeders have only current and voltage measurements at the medium voltage bus-bar at the substation. Thus, classical algorithms are difficult to implement, even considering off-line acquired data as pseudo-measurements. However, the need to automate the operation of distribution networks, mainly with regard to the selectivity of protection systems, as well as to enable load transfer maneuvers, is changing network planning policy. Accordingly, equipment incorporating telemetry and command modules has been installed to improve operational features, increasing the amount of measurement data available in real time at the System Operation Center (SOC). This encourages the development of a state estimator model involving real-time information and load pseudo-measurements built from typical power factors and utilization factors (demand factors) of distribution transformers. This work reports on the development of a new state estimation method, specific to radial distribution systems. The main algorithm of the method is based on the power summation load flow. The estimation is carried out piecewise, section by section of the feeder, going from the substation to the terminal nodes. For each section, a measurement model is built, resulting in a nonlinear overdetermined equation set whose solution is achieved through the Gaussian normal equation. The estimated variables of a section are used as pseudo-measurements for the next section.
In general, the measurement set for a generic section consists of pseudo-measurements of power flows and nodal voltages obtained from the previous section (or real-time measurements, when they exist), besides pseudo-measurements of injected powers for the power summations, whose functions are the load flow equations, assuming the network can be represented by its single-phase equivalent. The great advantage of the algorithm is its simplicity and low computational effort. Moreover, the algorithm is very efficient with regard to the accuracy of the estimated values. Besides the power summation state estimator, this work shows how other algorithms could be adapted to provide state estimation for medium voltage substations and networks, namely Schweppe's method and an algorithm based on current proportionality that is usually adopted for network planning tasks. Both estimators were implemented not only as alternatives to the proposed method, but also to obtain results that support its validation. Since in most cases no power measurement is performed at the beginning of the feeder, and this is required by the power summation estimation method, a new algorithm for estimating the network variables at the medium voltage bus-bar was also developed
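The per-section step described above (an overdetermined nonlinear measurement model solved through the normal equation) is the Gauss-Newton iteration dx = (HᵀH)⁻¹Hᵀ(z − h(x)). A generic sketch follows; the toy measurement model h and its Jacobian are hypothetical stand-ins, not the thesis's actual load flow equations:

```python
# Gauss-Newton sketch of solving an overdetermined nonlinear measurement
# model z = h(x) + e through the (unweighted) normal equation.
# The toy model below (3 measurements, 2 state variables) is hypothetical.

import numpy as np

def gauss_newton(h, jac, z, x0, tol=1e-10, max_iter=50):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = z - h(x)                               # measurement residuals
        H = jac(x)                                 # Jacobian of h at x
        dx = np.linalg.solve(H.T @ H, H.T @ r)     # normal equation
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Toy model: z1 = x0*x1, z2 = x0**2, z3 = x0 + x1.
h = lambda x: np.array([x[0] * x[1], x[0] ** 2, x[0] + x[1]])
jac = lambda x: np.array([[x[1], x[0]], [2 * x[0], 0.0], [1.0, 1.0]])
z = np.array([2.0, 4.0, 3.0])          # consistent with the state (2, 1)
print(gauss_newton(h, jac, z, x0=[1.5, 0.5]))   # converges to about [2, 1]
```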
Abstract:
This work presents a set of intelligent algorithms aimed at correcting calibration errors in sensors and reducing the periodicity of their calibrations. The algorithms were designed using Artificial Neural Networks because of their great capacity for learning, adaptation and function approximation. Two approaches are shown. The first uses Multilayer Perceptron networks to approximate the successive shapes of the calibration curve of a sensor that drifts out of calibration at different points in time. This approach requires knowledge of the sensor's operating time, but this information is not always available. To overcome this requirement, another approach using Recurrent Neural Networks was proposed. Recurrent Neural Networks have a great capacity for learning the dynamics of the system on which they are trained, so they can learn the dynamics of a sensor's calibration drift. Knowing the sensor's operating time or its drift dynamics, it is possible to determine how far out of calibration a sensor is and to correct its measured value, thus providing a more exact measurement. The algorithms proposed in this work can be implemented in a Foundation Fieldbus industrial network environment, which offers good device programmability through its function blocks, making it possible to apply them to the measurement process
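The thesis learns the drifted calibration curve with MLPs; as a much lighter stand-in for the same idea (learning the inverse map from drifted readings back to true values), the sketch below uses polynomial least squares. The drift model and its coefficients are hypothetical:

```python
# Stand-in for the MLP approach: fit the inverse calibration map
# (measured reading -> true value) by polynomial least squares.
# The drift model (gain, offset, mild nonlinearity) is hypothetical.

import numpy as np

true = np.linspace(0.0, 10.0, 200)               # true physical quantity
measured = 1.05 * true + 0.3 + 0.02 * true**2    # drifted sensor response

coeffs = np.polyfit(measured, true, deg=3)       # learn the inverse map
corrected = np.polyval(coeffs, measured)         # corrected measurements

print("max residual after correction:", float(np.max(np.abs(corrected - true))))
```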
Abstract:
The use of Geographic Information Systems (GIS) has become very important in fields that require detailed and precise study of earth surface features. Applications in environmental protection are one example that requires GIS tools for analysis and decision making by managers and the community enrolled in protected areas. In this specific field, a remaining challenge is to build a GIS that can be dynamically fed with data, allowing researchers and other agents to retrieve current, up-to-date information. In some cases, data are acquired in several ways and come from different sources. To address this problem, tools were implemented that include a model for spatial data treatment on the Web. The research issues involved start with the feeding and processing of environmental control data collected in loco, such as biotic and geological variables, and finish with the presentation of all information on the Web. For this dynamic processing, tools were developed that make MapServer more flexible and dynamic, allowing data uploading by the users themselves. Furthermore, a module was developed that uses interpolation to support spatial data analysis. A complex application that validated this research feeds the system with data coming from coral reef regions located in the northeast of Brazil. The system was implemented using the interactivity provided by the AJAX model and resulted in a substantial contribution to efficient information access, an essential mechanism for controlling events in environmental monitoring
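The abstract does not say which interpolation method the module uses; inverse-distance weighting (IDW) is a common choice for scattered field measurements, sketched below with hypothetical reef-station readings:

```python
# Inverse-distance-weighting (IDW) interpolation of scattered samples,
# a common method for in-loco field measurements. Station coordinates
# and values below are hypothetical (e.g., water temperature in deg C).

def idw(samples, x, y, power=2.0):
    """samples: list of (x, y, value). Returns the IDW estimate at (x, y)."""
    num = den = 0.0
    for sx, sy, v in samples:
        d2 = (x - sx) ** 2 + (y - sy) ** 2
        if d2 == 0.0:
            return v                               # exactly on a sample point
        w = 1.0 / d2 ** (power / 2.0)              # weight = 1 / distance^power
        num += w * v
        den += w
    return num / den

stations = [(0.0, 0.0, 26.0), (1.0, 0.0, 27.0), (0.0, 1.0, 28.0)]
print(idw(stations, 0.5, 0.5))    # equidistant from all three stations
```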
Abstract:
The exponential growth in radio frequency (RF) applications is accompanied by great challenges, such as more efficient use of the spectrum and the design of new architectures for multi-standard receivers or software defined radio (SDR). The key challenge in designing an SDR architecture is the implementation of a wide-band, reconfigurable receiver with low cost, low power consumption and a high level of integration and flexibility. As a new SDR design solution, a direct demodulator architecture based on five-port technology, or multi-port demodulator, has been proposed. However, the use of the five-port as a direct-conversion receiver requires an I/Q calibration (or regeneration) procedure in order to generate the in-phase (I) and quadrature (Q) components of the transmitted baseband signal. In this work, we propose to evaluate the performance of a blind calibration technique, requiring no training or pilot sequences in the transmitted signal, based on independent component analysis for the I/Q regeneration of the five-port downconversion, by exploiting the statistical properties of the three output signals
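The thesis uses ICA for blind I/Q regeneration; a simpler classical illustration of the underlying problem is Gram-Schmidt orthogonalization of imbalanced I/Q components, sketched below (this is a swapped-in technique for illustration, not the thesis's method, and the phase-skew value is hypothetical):

```python
# Gram-Schmidt I/Q imbalance correction: orthogonalize Q against I, then
# normalize both branches to unit power. Shown as a simple classical
# alternative to the ICA-based blind calibration discussed in the text.

import math
import random

def gram_schmidt_iq(i_samples, q_samples):
    n = len(i_samples)
    p_i = sum(s * s for s in i_samples) / n                    # I power
    r = sum(a * b for a, b in zip(i_samples, q_samples)) / n   # I/Q correlation
    q_orth = [q - (r / p_i) * i for i, q in zip(i_samples, q_samples)]
    p_q = sum(s * s for s in q_orth) / n
    gi, gq = math.sqrt(p_i), math.sqrt(p_q)
    return [s / gi for s in i_samples], [s / gq for s in q_orth]

# Imbalanced receiver: a hypothetical 0.3 rad phase skew leaks I into Q.
random.seed(1)
i_tx = [random.choice((-1.0, 1.0)) for _ in range(4096)]
q_tx = [random.choice((-1.0, 1.0)) for _ in range(4096)]
phase = 0.3
i_rx = i_tx
q_rx = [math.sin(phase) * i + math.cos(phase) * q for i, q in zip(i_tx, q_tx)]

i_c, q_c = gram_schmidt_iq(i_rx, q_rx)
corr = sum(a * b for a, b in zip(i_c, q_c)) / len(i_c)
print(f"residual I/Q correlation: {corr:.2e}")   # ~0 after correction
```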
Abstract:
The Global Positioning System, or simply GPS, is a radionavigation system developed by the United States for military applications that became very useful for civilian use. In recent decades Brazil has developed sounding rockets, and today many projects to build micro- and nanosatellites have appeared. Such vehicles, called spacecraft or high-dynamics vehicles, can use GPS for autonomous positioning and trajectory control. Despite the huge number of GPS receivers available for civilian applications, they cannot be used in high-dynamics vehicles because of environmental issues (vibration, temperature, etc.) or imposed dynamic operating limits. Only a few nations have the technology to build GPS receivers for spacecraft or high-dynamics vehicles, and they impose rules that hinder access to these receivers. This project intends to build a GPS receiver, install it in the payload of a sounding rocket and collect data to verify its correct operation under flight conditions. The receiver's software was available in source code and was tested on a software development platform named GPS Architect. Many organizations cooperated to support this project: AEB, UFRN, IAE, INPE and CLBI. After several phases (defining operating conditions, selecting and procuring electronics, making the printed circuit boards, assembly and assembly tests), the receiver was installed in a VS30 sounding rocket launched from the Centro de Lançamento da Barreira do Inferno in Natal/RN. Although position data from the receiver were collected only during the first 70 seconds of flight, these data confirm the receiver's correct operation through comparison between its positioning data and the trajectory data from CLBI's tracking radar, named ADOUR
Abstract:
This study aims to use a computational model that considers the statistical characteristics of the wind and the reliability characteristics of a wind turbine, such as failure and repair rates, representing the wind farm by a Markov process, to determine the estimated annual energy generated and compare it with a real case. This model can also be used in reliability studies and provides performance indicators that help in analyzing the feasibility of setting up a wind farm, provided that the power curve and wind speed measurements are available. To validate the model, simulations were performed using the database of the PETROBRAS wind farm in Macau. The results were very close to the real ones, confirming that the model successfully reproduces the behavior of all components involved. Finally, the results of this model were compared with the estimated annual energy obtained by modeling the wind distribution with a Weibull statistical distribution
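The simplest Markov reliability building block behind such a model is a two-state (Up/Down) turbine with failure rate λ and repair rate μ, whose steady-state availability A = μ/(λ + μ) scales the energy computed from the power curve and wind statistics. A minimal sketch; the power curve, wind histogram and rates below are hypothetical, not the Macau data:

```python
# Two-state Markov model of a turbine: steady-state availability
# A = mu / (lam + mu) scales the energy from the power curve.
# All numeric values below are hypothetical.

def availability(lam, mu):
    """Steady-state probability of the 'Up' state (lam, mu in 1/hour)."""
    return mu / (lam + mu)

def expected_annual_energy(power_curve, wind_hours, lam, mu):
    """power_curve: wind-speed bin -> kW; wind_hours: bin -> hours/year."""
    a = availability(lam, mu)
    return a * sum(power_curve[v] * h for v, h in wind_hours.items())

power_curve = {6: 150.0, 8: 400.0, 10: 800.0}     # kW per wind-speed bin
wind_hours = {6: 2000.0, 8: 1500.0, 10: 800.0}    # hours/year per bin
lam, mu = 1 / 2000.0, 1 / 50.0                    # assumed failure/repair rates

print(expected_annual_energy(power_curve, wind_hours, lam, mu), "kWh/year")
```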
Abstract:
Considering the transition from the industrial society to the information society, we realize that the digital training currently offered is insufficient for navigating a digitized reality. To help minimize this problem, this work assesses, validates and develops the RoboEduc software for educational robotics, whose main differential is the programming of robotic devices at multiple levels, considering the specifics of the training context. One emphasis of this work is the presentation of the materials and procedures involved in the development, analysis and evolution of this software. For validation, usability tests were performed; based on the analysis of these tests, version 4.0 of RoboEduc was developed
Abstract:
This document describes a pilot plant for oil wells equipped with plunger lift. In addition to its small size (21.5 meters) and location on the surface, the plant's well has part of its structure in transparent acrylic, allowing easy visualization of the phenomena inherent to the method. The rock formation from which the pilot plant's well draws its fluids (water and air) is simulated by a machine room housing the compressor and the water pump that produce the air and water. To keep the air and water flows at known and controlled values, the lines connecting the machine room to the wellhole are equipped with flow sensors and valves. A supervisory system was developed that gives the user real-time monitoring of the pressures and flow rates involved. From the supervisory system the user can also choose how the process cycles will be controlled (by time, by pressure or manually) and set the air and water flow values used in the cycles. These values can be defined from a set point or from the valve opening percentage. Results from tests performed on the plant using the most common forms of control, by time and by casing pressure, are shown. Finally, they are compared with results generated by a simulator configured with the pilot plant's characteristics
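The two automatic cycle-control policies the supervisory system offers (by time and by casing pressure) amount to a small valve state machine. A sketch under assumed semantics; the function name and every threshold value are hypothetical:

```python
# Sketch of plunger-lift cycle control: open/close the motor valve either
# after fixed dwell times or on casing-pressure thresholds.
# All thresholds and the time/pressure semantics are hypothetical.

def next_valve_state(mode, valve_open, t_in_state, casing_p,
                     t_open=60.0, t_closed=120.0, p_open=12.0, p_close=4.0):
    """Return True (valve open) or False (valve closed)."""
    if mode == "time":
        limit = t_open if valve_open else t_closed
        return (not valve_open) if t_in_state >= limit else valve_open
    if mode == "pressure":
        if not valve_open and casing_p >= p_open:
            return True          # enough casing pressure to lift the plunger
        if valve_open and casing_p <= p_close:
            return False         # pressure exhausted: close and rebuild
        return valve_open
    raise ValueError("mode must be 'time' or 'pressure'")

print(next_valve_state("pressure", False, 0.0, 13.0))   # opens: p >= p_open
```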
Abstract:
This work presents a theoretical analysis, together with numerical and experimental results, of the transmission characteristics of microstrip bandpass filters with different geometries. These filters are built on isotropic dielectric substrates. The numerical analysis is performed with commercial software such as Ansoft Designer and Agilent Advanced Design System (ADS). In addition to these tools, a Matlab script was written to analyze the filters using the Finite-Difference Time-Domain (FDTD) method. The filter design focused on the first filtering stage of the ITASAT Transponder receiver and its integration with the other systems. Several microstrip filter architectures were studied, aiming at feasibility of implementation and suitable practical application for the purposes of the ITASAT Project, owing to their low space occupation at the lower UHF frequencies. The ITASAT project is an experimental university project that will build a satellite to join the Brazilian Data Collection System's satellite constellation, with efforts from many Brazilian institutions, such as AEB (Brazilian Space Agency), ITA (Technological Institute of Aeronautics), INPE/CRN (National Institute for Space Research/Northeastern Regional Center) and UFRN (Federal University of Rio Grande do Norte). Comparisons between numerical and experimental results of all filters showed good agreement, meeting most of the objectives. Post-work improvements were also suggested.
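The FDTD method the Matlab script implements interleaves E- and H-field updates on a staggered (Yee) grid in time-stepped fashion. A minimal 1-D free-space sketch (grid size, step count and source are arbitrary illustrations, far simpler than a microstrip model):

```python
# Minimal 1-D FDTD (Yee scheme) in normalized units: interleaved E and H
# updates with Courant number 0.5 and a Gaussian soft source mid-grid.
# Grid size, step count and source parameters are arbitrary.

import math

nz, steps = 200, 150
ez = [0.0] * nz            # electric field samples
hy = [0.0] * nz            # magnetic field samples (staggered half cell)

for n in range(steps):
    for k in range(nz - 1):                      # H update (half time step)
        hy[k] += 0.5 * (ez[k + 1] - ez[k])
    for k in range(1, nz):                       # E update
        ez[k] += 0.5 * (hy[k] - hy[k - 1])
    ez[nz // 2] += math.exp(-((n - 30) ** 2) / 100.0)   # soft Gaussian source

print("peak |Ez| after", steps, "steps:", max(abs(e) for e in ez))
```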
Abstract:
Recent years have seen an increase in the acceptance and adoption of parallel processing, for high-performance scientific computing as much as for general-purpose applications. This acceptance has been favored mainly by the development of massively parallel processing (MPP) environments and of distributed computing. A point common to distributed systems and MPP architectures is the notion of message passing, which allows communication between processes. A message passing environment consists basically of a communication library that acts as an extension of the programming languages used to write parallel applications, such as C, C++ and Fortran. In the development of parallel applications, a fundamental aspect is their performance analysis. Several metrics can be used in this analysis: execution time, efficiency in the use of the processing elements, and scalability of the application with respect to the increase in the number of processors or in the size of the problem instance. Establishing models or mechanisms that allow this analysis can be quite complicated, considering the parameters and degrees of freedom involved in the implementation of a parallel application. One alternative has been the use of tools for collecting and visualizing performance data, which allow the user to identify bottlenecks and sources of inefficiency in an application. Efficient visualization requires identifying and collecting data about the execution of the application, a stage called instrumentation.
This work initially presents a study of the main techniques used for collecting performance data, followed by a detailed analysis of the main available tools that can be used on parallel architectures of the Beowulf cluster type, with Linux on the x86 platform, using communication libraries based on MPI (Message Passing Interface), such as LAM and MPICH. This analysis is validated on parallel applications dealing with the training of perceptron-type neural networks using backpropagation. The conclusions show the potential and ease of use of the analyzed tools.
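The metrics listed above (execution time, efficiency, scalability) follow directly from wall-clock timings at different processor counts: speedup S(p) = T(1)/T(p) and efficiency E(p) = S(p)/p. A small sketch with hypothetical timings (e.g., backpropagation training runs on 1 to 8 cluster nodes):

```python
# Speedup and efficiency from wall-clock timings of a parallel application.
# The timing values are hypothetical illustrations.

def speedup(t_serial, t_parallel):
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, n_procs):
    return speedup(t_serial, t_parallel) / n_procs

timings = {1: 120.0, 2: 65.0, 4: 36.0, 8: 22.0}   # seconds per run
for p, t in timings.items():
    print(f"{p} procs: speedup {speedup(timings[1], t):.2f}, "
          f"efficiency {efficiency(timings[1], t, p):.2f}")
```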
Abstract:
Multiphase flows in ducts can adopt several morphologies depending on the mass fluxes and fluid properties. Annular flow is one of the most frequently encountered flow patterns in industrial applications. For gas-liquid systems, it consists of a liquid film flowing adjacent to the wall and a gas core flowing in the center of the duct. This work presents a numerical study of this flow pattern in gas-liquid systems in vertical ducts. For this, a solution algorithm was developed and implemented in FORTRAN 90 to numerically solve the governing transport equations. The mass and momentum conservation equations are solved simultaneously from the wall to the center of the duct using the Finite Volume Technique. Momentum conservation at the gas-liquid interface is enforced using an equivalent effective viscosity, which also allows both velocity fields to be solved in a single system of equations. In this way, the velocity distributions across the gas core and the liquid film are obtained iteratively, together with the global pressure gradient and the liquid film thickness. Convergence criteria are based on the satisfaction of mass balance within the liquid film and the gas core. For system closure, two different approaches are presented for calculating the radial turbulent viscosity distribution within the liquid film and the gas core. The first combines a one-equation k-ε model and a low-Reynolds k-ε model. The second uses a low-Reynolds k-ε model to compute the eddy viscosity profile from the center of the duct right up to the wall. Appropriate interfacial values for k and ε are proposed, based on concepts and ideas previously used, with success, in stratified gas-liquid flow. The proposed approaches are compared with an algebraic model found in the literature, specifically devised for annular gas-liquid flow, using available experimental results. This also serves as a validation of the solution algorithm
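Finite-volume discretizations of 1-D transport equations like those described typically reduce each sweep to a tridiagonal linear system, for which the Thomas algorithm (TDMA) is the usual workhorse. A standalone Python sketch (the thesis code is FORTRAN 90; the example system is a generic diffusion stencil, not the actual flow equations):

```python
# Thomas algorithm (TDMA) for tridiagonal systems
# a[i]*x[i-1] + b[i]*x[i] + c[i]*x[i+1] = d[i],
# as used in finite-volume sweeps. Example system is illustrative only.

def tdma(a, b, c, d):
    """a: sub-diagonal (a[0] unused), b: diagonal, c: super-diagonal
    (c[-1] unused), d: right-hand side. Returns the solution list x."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                     # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):            # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# 1-D diffusion stencil: -x[i-1] + 2*x[i] - x[i+1] = d[i]; solution is all ones.
print(tdma([0, -1, -1, -1], [2, 2, 2, 2], [-1, -1, -1, 0], [1, 0, 0, 1]))
```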
Abstract:
We present two models of blocks made of a composite material obtained from cement, plaster, crushed EPS, shredded tire, mud, sand and water for the construction of low-cost housing. Metal molds were made for manufacturing the blocks to be used in the construction of a residence for low-income families. Compressive strength tests of the composite were performed for various formulations that met the specific standard for blocks used in construction. The thermal conductivity of the composite was studied for a subsequent study of the thermal comfort generated in a residence built with the proposed composite. We also determined the specific mass and water absorption for each formulation studied. Using a house already built with another composite material, we closed a window opening with the proposed blocks and verified the thermal insulation by measuring the external and internal temperatures of the blocks. The blocks provided good thermal insulation of the environment, resulting in differences of up to 12.6 °C between the outer and inner faces. The feasibility of using the composite for the proposed purpose is shown, and the most appropriate formulation is chosen
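The measured 12.6 °C face-to-face temperature difference relates to the studied thermal conductivity through Fourier's law for a plane wall, q = k·ΔT/L. A small sketch; the conductivity and block thickness below are hypothetical values, not the thesis's measurements:

```python
# Fourier's law for steady conduction through a plane wall: q = k * dT / L.
# Conductivity k and thickness L below are hypothetical; dT = 12.6 degC is
# the face-to-face difference reported in the abstract.

def heat_flux(k, thickness, t_hot, t_cold):
    """Heat flux through a plane wall, in W/m^2 (k in W/m.K, thickness in m)."""
    return k * (t_hot - t_cold) / thickness

q = heat_flux(k=0.25, thickness=0.10, t_hot=38.0, t_cold=25.4)  # dT = 12.6
print(f"heat flux through the block: {q:.1f} W/m^2")
```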
Abstract:
Improving the adherence between oilwell metallic casing and cement sheath can potentially decrease the number of corrective actions presently necessary for Northeastern wells submitted to steam injection. In addition to the direct costs of the corrective operations, the economic impact of primary cementing failure also includes lost well production. The adherence between casing and cement is currently evaluated by simple shear tests not standardized by the American Petroleum Institute (API). Therefore, the objective of the present study is to propose and evaluate a standardized method to assess the adherence of oilwell metallic casing to the cement sheath. To that end, a section of a cemented oilwell was simulated and used to test the effect of different parameters on the shear stress of the system. Surface roughness and different cement compositions, submitted or not to thermal cycling, were evaluated. The results revealed that the proposed test geometry and parameters yielded different values for the shear stress of the system, corresponding to different adherence conditions between metallic casing and cement sheath
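In a push-out shear test of this kind, the interfacial shear stress is conventionally taken as the peak load divided by the casing/cement contact area, τ = F/(π·d·L). A sketch; the load and specimen dimensions below are hypothetical, not the thesis's results:

```python
# Interfacial shear stress from a push-out test: tau = F / (pi * d * L),
# with F the peak load and pi*d*L the casing/cement contact area.
# The load and dimensions below are hypothetical.

import math

def interface_shear_stress(peak_load_n, casing_diameter_m, embed_length_m):
    """Mean shear stress over the cylindrical interface, in Pa."""
    return peak_load_n / (math.pi * casing_diameter_m * embed_length_m)

tau = interface_shear_stress(25000.0, 0.0889, 0.10)   # 25 kN, 3.5-in casing
print(f"interfacial shear stress: {tau / 1e6:.2f} MPa")
```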