73 results for ESTIMATIVA DE TEMPO (time estimation)
Abstract:
Quasi-experimental, prospective, comparative study with a quantitative approach, carried out in a reference hospital, aiming to identify the effectiveness of the Numerical Rating Scale (NRS) and the McGill Pain Questionnaire (MPQ), used simultaneously, in evaluating a group of patients with oncologic pain (experimental group); to identify the effectiveness of the NRS alone in evaluating a group of patients with oncologic pain (control group); and to identify the resolution of pain according to the prescribed medication, considering the results of the rating scales, comparing the two groups of patients. The population consisted of 100 patients, 50 in the experimental group and 50 in the control group, with data collected from February to April 2010. In the experimental group, 32% of the patients were aged 60 to 69 and 80% were female; 30% had a primary tumor in the breast, 58% had metastasis, and in 70% the disease was localized. In the first pain evaluation, 26% rated the pain as mild, 46% as moderate and 28% as severe, with a mean of 5.50. In the second evaluation, 2% reported no pain, 70% mild, 26% moderate and 2% severe, with a mean of 3.30. Of those with moderate pain, 60% used non-opioid medication; of those with severe pain, 25% were medicated with non-opioids and 41.67% with weak opioids. Regarding the Pain Management Index (PMI), 44.0% were rated "-1". In the control group, 28% were aged 40 to 49 and 54% were male; 20% had a primary tumor in the breast and 20% in the genitourinary system; 56% presented metastasis, and in 64% the disease was localized. In the first pain evaluation, 14% considered the pain mild, 42% moderate and 44% severe, with a mean of 6.26. In the second evaluation, 18% reported no pain, 38% mild, 40% moderate and 4% severe, with a mean of 3.0. Regarding drug therapy, 71.43% of those with moderate pain used non-opioids, while 22.73% of those with severe pain used non-opioids and 27.27% weak opioids. Considering the PMI, 42% were rated "-1" and 42% were rated "0". We conclude that, despite the importance of pain as the fifth vital sign, it is still under-identified and under-treated by professionals. The oncologic patients studied nevertheless tended to report pain more readily when evaluated with the NRS alone than with the combined use of the NRS and MPQ. We believe, however, that the combination of the two instruments provides a more effective evaluation of pain, since it captures both its quantitative and qualitative aspects. We recommend replicating this study with a larger population, over a longer period and with more evaluations, in order to confirm or refute the hypothesis that the NRS and MPQ together evaluate pain in the oncologic patient better.
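A note on the Pain Management Index used above: it is commonly computed, following Cleeland, as the level of the strongest analgesic prescribed (0 = none, 1 = non-opioid, 2 = weak opioid, 3 = strong opioid) minus the reported pain level (0 = none, 1 = mild, 2 = moderate, 3 = severe), with negative values indicating potentially inadequate treatment. The abstract does not spell out this convention, so the sketch below is illustrative only.

```python
def pain_management_index(analgesic_level: int, pain_level: int) -> int:
    """Pain Management Index (Cleeland convention): analgesic level (0-3) minus pain level (0-3).

    Negative values (e.g. -1) suggest analgesia inadequate for the reported pain.
    """
    return analgesic_level - pain_level

# Example: severe pain (3) treated with a weak opioid (2) scores -1.
print(pain_management_index(analgesic_level=2, pain_level=3))
```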
Abstract:
This work presents a study on the quality of health care, focusing on appointment scheduling. The main purpose is to define a statistical model and propose a quality grade for the waiting time for a consultation, measured from the day the patient schedules the appointment to the day the consultation takes place. Reliability techniques and functions are used, whose main characteristic is the analysis of data on the time until a certain event occurs. A random sample of 1,743 patients was drawn from the appointment system of a university hospital, the Hospital Universitário Onofre Lopes of the Federal University of Rio Grande do Norte, Brazil, stratified by clinical specialty. The data were analyzed with parametric reliability methods, and the fitted regression model showed the Weibull distribution to be the best fit to the data. The proposed quality grade is based on the PAHO criteria for scheduling a consultation, and the result is that no clinic achieved the PAHO quality grade. The proposed quality grade could be used to set priorities for improvement and as a criterion for quality control.
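To illustrate the kind of parametric fit described above (this is not the thesis's code or data), waiting times in days can be fitted to a Weibull distribution with SciPy; the synthetic sample and the 30-day threshold below are assumptions made for the sketch.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic waiting times (days) standing in for the 1,743 observed values.
waiting_days = rng.weibull(1.4, size=1743) * 30.0

# Fit a two-parameter Weibull (location fixed at zero, as usual for waiting times).
shape, loc, scale = stats.weibull_min.fit(waiting_days, floc=0)
print(f"shape k = {shape:.2f}, scale = {scale:.1f} days")

# Probability of being seen within 30 days under the fitted model.
print(f"P(T <= 30 days) = {stats.weibull_min.cdf(30.0, shape, loc, scale):.2f}")
```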
Abstract:
Knowledge management has received major attention from product designers because many of the activities in this process have to be creative and therefore depend essentially on the knowledge of the people involved. Moreover, the Product Development Process (PDP) is one of the activities in which knowledge management manifests itself most critically, since it involves intense application of knowledge. This thesis therefore analyzes knowledge management with the aim of improving the PDP and proposes a theoretical model of knowledge management. The model uses five steps (creation, maintenance, dissemination, utilization and discard) and verifies the occurrence of four types of knowledge conversion (socialization, externalization, combination and internalization) in order to improve knowledge management in this process. The intellectual capital of Small and Medium Enterprises (SMEs), managed efficiently and with the participation of all employees, becomes the mechanism for the creation and transfer of knowledge, supporting and consequently improving the PDP. The expected result is an effective and efficient application of the proposed model to the creation of an organizational knowledge base (organizational memory) aimed at better PDP performance. To this end, an extensive analysis of knowledge management (a qualitative and subjective evaluation instrument) was carried out in the Design department of a Brazilian organization (SEBRAE/RN). This analysis aimed to establish the state of the art of the Design department regarding the use of knowledge management, an important step in evaluating the department's level of evolution in the practical use of knowledge management before implementing the proposed theoretical model and its methodology. Finally, based on the results of this diagnosis, a knowledge management system is suggested to facilitate knowledge sharing within the organization, in other words, within the Design department.
Abstract:
This thesis proposes the specification and performance analysis of a real-time communication mechanism for the IEEE 802.11/11e standard, called Group Sequential Communication (GSC). The GSC performs better than the HCCA mechanism when dealing with small data packets, by adopting decentralized medium access control with a publish/subscribe communication scheme. The main objective of the thesis is to reduce the HCCA overhead of the Polling, ACK and QoS Null frames exchanged between the Hybrid Coordinator and the polled stations. The GSC eliminates the polling scheme used by the HCCA scheduling algorithm by employing a Virtual Token Passing procedure among the members of the real-time group, who are granted high-priority, sequential access to the communication medium. To improve the reliability of the proposed mechanism over a noisy channel, an error recovery scheme called the second-chance algorithm is presented, based on a block acknowledgment strategy that allows missing real-time messages to be retransmitted. The GSC mechanism thus maintains real-time traffic across many IEEE 802.11/11e devices with optimized bandwidth usage and minimal delay variation for data packets on the wireless network. To validate the communication scheme, the GSC and HCCA mechanisms were implemented in network simulation software developed in C/C++ and their performance results were compared. The experiments show the efficiency of the GSC mechanism, especially in industrial communication scenarios.
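As a purely conceptual illustration (the thesis implements GSC in a C/C++ network simulator), the virtual-token idea, in which each member of the real-time group transmits in a fixed sequence without being polled by a central coordinator, can be sketched as follows; the class and station names are invented for the example.

```python
class VirtualTokenGroup:
    """Toy model of sequential, poll-free medium access among real-time group members."""

    def __init__(self, members):
        self.members = list(members)   # fixed transmission order of the group
        self.position = 0              # index of the member currently holding the virtual token

    def holder(self):
        return self.members[self.position]

    def pass_token(self):
        # After its transmission slot the token implicitly moves to the next member,
        # with no Polling/ACK/QoS Null exchange with a central coordinator.
        self.position = (self.position + 1) % len(self.members)

group = VirtualTokenGroup(["station_A", "station_B", "station_C"])
for _ in range(6):
    print(group.holder(), "transmits its real-time message")
    group.pass_token()
```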
Abstract:
In this work we use interval mathematics to establish interval counterparts for the main tools used in digital signal processing. More specifically, the approach developed here addresses signals, systems, sampling, quantization, coding and Fourier transforms. A detailed study of some interval arithmetics that handle complex numbers is provided, namely: rectangular complex interval arithmetic, circular complex arithmetic, and interval arithmetic for polar sectors. This led us to investigate properties relevant to the development of a theory of interval digital signal processing. It is shown that the sets IR and R(C), endowed with any correct arithmetic, are not algebraic fields, meaning that these sets do not behave like the real and complex numbers. An alternative to the notion of interval complex width is also provided, and the Kulisch-Miranker order is used to write complex numbers in interval form, enabling operations on endpoints. The use of interval signals and systems is made possible by the representation of complex values in floating-point systems: if a number x ∈ R is not representable in a floating-point system F, it is mapped to an interval [x⁻; x⁺] such that x⁻ is the largest number in F smaller than x and x⁺ is the smallest number in F greater than x. This interval representation is the starting point for the definition of interval signals and systems taking real or complex values, and it provides the extension of notions such as causality, stability, time invariance, homogeneity, additivity and linearity to interval systems. The quantization process is extended to its interval counterpart, and interval versions of quantization levels, quantization error and the encoded signal are provided. It is shown that the interval quantization levels represent complex quantization levels and that the classical quantization error lies within the interval quantization error. An estimate of the interval quantization error and an interval version of the Z-transform (and hence of the Fourier transform) are provided. Finally, the results of a MATLAB implementation are given.
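The directed enclosure described above, mapping a real x to the tightest machine interval that contains it, can be illustrated in Python using double precision as the floating-point system F; this is only a sketch of the idea, not the thesis's formulation.

```python
import math
from fractions import Fraction

def enclosing_interval(x: Fraction):
    """Return the tightest pair of doubles (lo, hi) with lo <= x <= hi."""
    f = float(x)                     # rounds x to the nearest representable double
    if Fraction(f) == x:
        return f, f                  # x is exactly representable in F
    if Fraction(f) < x:
        return f, math.nextafter(f, math.inf)      # f is the lower endpoint
    return math.nextafter(f, -math.inf), f         # f is the upper endpoint

lo, hi = enclosing_interval(Fraction(1, 10))       # 1/10 is not representable in binary
print(lo, hi)                                      # consecutive doubles bracketing 1/10
print(Fraction(lo) <= Fraction(1, 10) <= Fraction(hi))   # True
```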
Abstract:
Predictive control has gained, in recent years, a growing number of adherents, owing to the ease of tuning its parameters, the extension of its concepts to multi-input/multi-output (MIMO) systems, the fact that nonlinear process models can be linearized around an operating point and then used directly in the controller, and, above all, its being the only methodology that can take into account, during controller design, the constraints on the control signals and on the process output. The time-varying weighting generalized predictive control (TGPC) studied in this work is another alternative among the existing predictive controllers: it is a modification of generalized predictive control (GPC) in which a reference model, computed from design parameters previously established by the designer, is used together with a new criterion function whose minimization yields the best parameters for the controller. Genetic algorithms are used to minimize the proposed criterion function, and the robustness of the TGPC is demonstrated through performance, stability and robustness criteria. To compare the results achieved with the TGPC controller, GPC and proportional-integral-derivative (PID) controllers are used, with all techniques applied to stable, unstable and non-minimum-phase plants. The simulated examples are carried out in MATLAB. It is verified that the modifications implemented in the TGPC demonstrate the efficiency of this algorithm.
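As a rough illustration of using a genetic algorithm to minimize a criterion function (the TGPC criterion itself is not reproduced here, so a toy quadratic cost and invented bounds stand in for it), one loop of selection, crossover and mutation might look like this:

```python
import numpy as np

rng = np.random.default_rng(1)

def criterion(params):
    """Stand-in cost; the thesis's actual TGPC criterion function is not reproduced here."""
    a, b = params
    return (a - 1.2) ** 2 + (b - 0.4) ** 2        # toy quadratic with a known minimum

def genetic_minimize(cost, bounds, pop=40, generations=60, mutation=0.05):
    lo, hi = np.array(bounds, dtype=float).T
    population = rng.uniform(lo, hi, size=(pop, len(bounds)))
    for _ in range(generations):
        fitness = np.array([cost(ind) for ind in population])
        parents = population[np.argsort(fitness)[: pop // 2]]          # selection
        n_children = pop - len(parents)
        pa = parents[rng.integers(0, len(parents), n_children)]
        pb = parents[rng.integers(0, len(parents), n_children)]
        alpha = rng.uniform(size=(n_children, 1))
        children = alpha * pa + (1 - alpha) * pb                       # crossover
        children += rng.normal(0.0, mutation, children.shape)          # mutation
        population = np.clip(np.vstack([parents, children]), lo, hi)
    best = min(population, key=cost)
    return best, cost(best)

print(genetic_minimize(criterion, bounds=[(0, 5), (0, 2)]))
```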
Abstract:
This work proposes an algorithm for controlling and optimizing idle time in oil production wells equipped with beam pumps. The algorithm was designed entirely on the basis of existing papers and of data acquired from two pilot wells in the Potiguar Basin. Petroleum engineering concepts such as submergence, pump-off, Basic Sediments and Water (BSW), Inflow Performance Relationship (IPR), reservoir pressure and inflow pressure, among others, were incorporated into the algorithm through a mathematical treatment developed for a typical well and then extended to the general case. The optimization increases the utilization of the well's maximum production potential with the smallest number of pumping-unit cycles, which translates directly into lower operating costs and lower electricity consumption.
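Among the concepts listed above, the Inflow Performance Relationship is often expressed, for solution-gas-drive wells, by Vogel's classical correlation; the sketch below shows that standard relation rather than the specific mathematical treatment developed in the work, and the numbers are illustrative.

```python
def vogel_inflow_rate(q_max: float, p_wf: float, p_res: float) -> float:
    """Vogel IPR: production rate as a function of bottom-hole flowing pressure.

    q_max -- maximum (open-flow) rate, e.g. m3/day
    p_wf  -- bottom-hole flowing pressure
    p_res -- average reservoir pressure (same unit as p_wf)
    """
    ratio = p_wf / p_res
    return q_max * (1.0 - 0.2 * ratio - 0.8 * ratio ** 2)

# Example: a well with q_max = 120 m3/day flowing at half the reservoir pressure.
print(vogel_inflow_rate(q_max=120.0, p_wf=100.0, p_res=200.0))   # 84 m3/day
```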
Abstract:
We propose a new approach to the reduction and abstraction of visual information for robot vision applications. Basically, we propose using a multi-resolution representation in combination with a moving fovea to reduce the amount of information taken from an image. We introduce the mathematical formalization of the moving-fovea approach and mapping functions that help in using this model. Two indices (resolution and cost) are proposed that can be useful in choosing the model's variables. With this theoretical approach it is possible to apply several filters, to compute disparity and to perform motion analysis in real time (less than 33 ms to process an image pair on a notebook with an AMD Turion Dual Core 2 GHz processor). As the main result, most of the time the moving fovea allows the robot to keep a possible region of interest visible in both images without physically moving its robotic devices. We validate the proposed model with experimental results.
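The idea of combining coarse wide-field levels with fine levels centred on a movable fovea, so that the total amount of image data stays small, can be sketched as follows; the window size, number of levels and function names are assumptions for illustration, not the mapping functions defined in the thesis.

```python
import numpy as np

def foveated_levels(image, fovea_xy, num_levels=4, window=64):
    """Extract fixed-size windows around the fovea, one per resolution level.

    Level 0 covers a wide field at coarse resolution; the last level is a
    full-resolution crop centred on the fovea, so total data stays small.
    """
    fx, fy = fovea_xy
    levels = []
    for k in range(num_levels):
        step = 2 ** (num_levels - 1 - k)           # subsampling factor for this level
        half = window * step // 2                  # extent covered, in original pixels
        x0, y0 = max(fx - half, 0), max(fy - half, 0)
        crop = image[y0:y0 + 2 * half:step, x0:x0 + 2 * half:step]
        levels.append(crop)
    return levels

img = np.zeros((480, 640), dtype=np.uint8)
for lv in foveated_levels(img, fovea_xy=(320, 240)):
    print(lv.shape)    # each level has roughly window x window pixels
```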
Abstract:
The incorporation of industrial automation into the medical area requires mechanisms for the safe and efficient establishment of communication between biomedical devices. One solution to this problem is the MP-HA (Multicycles Protocol to Hospital Automation), which lays down a network segmented by beds and coordinated by an element called the Service Provider. The goal of this work is to model this Service Provider and to analyze the performance of the activities it executes in the establishment and maintenance of hospital networks.
Abstract:
This study uses a computational model that considers the statistical characteristics of the wind and the reliability characteristics of a wind turbine, such as failure and repair rates, representing the wind farm as a Markov process, in order to determine the estimated annual energy generated and compare it with a real case. The model can also be used in reliability studies and provides performance indicators that help in analyzing the feasibility of setting up a wind farm, provided the power curve and wind-speed measurements are available. To validate the model, simulations were run using the database of the PETROBRAS wind farm in Macau. The results were very close to the real values, confirming that the model successfully reproduces the behavior of all components involved. Finally, the results of this model were compared with the annual energy estimated by modeling the wind distribution with a Weibull statistical distribution.
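A much simplified sketch of the approach: a turbine with constant failure rate lambda and repair rate mu has steady-state availability mu / (lambda + mu), and the expected annual energy can be approximated by weighting a power curve with a Weibull wind-speed distribution and that availability. The rates, power curve and Weibull parameters below are invented for illustration and are not the Macau wind farm data.

```python
import numpy as np
from scipy import stats

failure_rate = 1 / 1500.0    # lambda, failures per hour (assumed)
repair_rate = 1 / 60.0       # mu, repairs per hour (assumed)
availability = repair_rate / (failure_rate + repair_rate)   # steady-state Markov availability

def power_curve(v):
    """Toy power curve (kW): cut-in 3 m/s, rated 1500 kW at 12 m/s, cut-out 25 m/s."""
    p = np.where(v < 3, 0.0, np.minimum(1500.0 * ((v - 3) / 9.0) ** 3, 1500.0))
    return np.where(v > 25, 0.0, p)

wind = stats.weibull_min(c=2.1, scale=8.5)        # assumed Weibull wind-speed model (m/s)
v = np.linspace(0.0, 30.0, 601)
dv = v[1] - v[0]
mean_power_kw = float(np.sum(power_curve(v) * wind.pdf(v)) * dv)   # expected power

annual_energy_mwh = availability * mean_power_kw * 8760 / 1000.0
print(f"availability = {availability:.3f}, estimated annual energy = {annual_energy_mwh:.0f} MWh")
```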
Abstract:
Several mobile robots exhibit nonlinear behavior, mainly due to friction phenomena between the mechanical parts of the robot or between the robot and the ground. Linear models are adequate in some cases, but it is necessary to take the robot's nonlinearity into account when precise displacement and positioning are required. In this work, a parametric model identification procedure for a differential-drive mobile robot that considers the dead zone in the robot's actuators is proposed. The method consists in dividing the system into Hammerstein systems and then using the key-term separation principle to express the input-output relations in a form that exposes the parameters of both the linear and the nonlinear blocks. The parameters are then estimated simultaneously with a recursive least squares algorithm. The results show that it is possible to identify the dead-zone thresholds together with the linear parameters.
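The estimation core mentioned above is the standard recursive least squares update; the sketch below shows that generic update with placeholder regressors, not the thesis's specific Hammerstein/key-term parameterization of the dead zone.

```python
import numpy as np

class RecursiveLeastSquares:
    """Standard RLS: theta <- theta + K * (y - phi @ theta)."""

    def __init__(self, n_params, forgetting=0.98):
        self.theta = np.zeros(n_params)          # parameter estimates
        self.P = np.eye(n_params) * 1e3          # covariance of the estimates
        self.lam = forgetting

    def update(self, phi, y):
        phi = np.asarray(phi, dtype=float)
        K = self.P @ phi / (self.lam + phi @ self.P @ phi)     # gain vector
        self.theta = self.theta + K * (y - phi @ self.theta)   # correct by prediction error
        self.P = (self.P - np.outer(K, phi) @ self.P) / self.lam
        return self.theta

# Toy use: identify y = 2*u1 - 0.5*u2 from noisy samples.
rng = np.random.default_rng(2)
rls = RecursiveLeastSquares(2)
for _ in range(200):
    phi = rng.normal(size=2)
    y = 2.0 * phi[0] - 0.5 * phi[1] + rng.normal(scale=0.01)
    theta = rls.update(phi, y)
print(theta)    # close to [2.0, -0.5]
```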
Abstract:
This work presents a modelling and identification method for a wheeled mobile robot that includes the actuator dynamics. Instead of the classic modelling approach, in which the robot position coordinates (x, y) are used as state variables (resulting in a nonlinear model), the proposed discrete model is based on the travelled-distance increment Delta_l. The resulting model is thus linear and time-invariant and can be identified by classical methods such as recursive least squares. This approach has one difficulty: Delta_l cannot be measured directly. Here, this problem is solved by estimating Delta_l with a second-order polynomial approximation. Experimental data were collected and the proposed method was used to identify the model of a real robot.
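One way to read the second-order polynomial estimate of the unmeasured increment Delta_l is to fit a quadratic to recent travelled-distance samples and take the increment predicted for the next sampling period; the sketch below illustrates this reading with invented data and is not necessarily the exact scheme used in the work.

```python
import numpy as np

def estimate_delta_l(recent_distances, dt=0.05):
    """Fit a 2nd-order polynomial to the latest travelled-distance samples
    and return the increment predicted for the next sampling period."""
    t = np.arange(len(recent_distances)) * dt
    a, b, c = np.polyfit(t, recent_distances, deg=2)
    t_last, t_next = t[-1], t[-1] + dt
    return (a * t_next**2 + b * t_next + c) - (a * t_last**2 + b * t_last + c)

# Invented odometry samples (metres) at 20 Hz.
print(estimate_delta_l([0.000, 0.012, 0.026, 0.042, 0.060]))
```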
Abstract:
The seismic method is of extreme importance in geophysics. Mainly associated with oil exploration, this line of research concentrates most of the investment in the area. The acquisition, processing and interpretation of seismic data are the stages that make up a seismic study. Seismic processing, in particular, is focused on producing an image that represents the geological structures in the subsurface. It has evolved significantly in recent decades, driven by the demands of the oil industry and by advances in hardware, whose greater storage and processing capabilities have enabled the development of more sophisticated processing algorithms, such as those that use parallel architectures. One of the most important steps in seismic processing is imaging. Migration of seismic data is one of the techniques used for imaging, with the goal of obtaining a seismic section image that represents the geological structures as accurately and faithfully as possible. The result of migration is a 2D or 3D image in which it is possible to identify faults, salt domes and other structures of interest, such as potential hydrocarbon reservoirs. However, a high-quality, accurate migration can be very time-consuming, owing to the heuristics of the mathematical algorithm and the large volume of input and output data involved; it may take days, weeks or even months of uninterrupted execution on supercomputers, representing large computational and financial costs that could make these methods impracticable. Aiming at performance improvement, this work parallelized the core of a Reverse Time Migration (RTM) algorithm using the Open Multi-Processing (OpenMP) parallel programming model, given the large computational effort required by this migration technique. Speedup and efficiency analyses were performed and, ultimately, the degree of algorithmic scalability with respect to the technological advances expected in future processors was identified.
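For reference, the speedup and efficiency metrics mentioned above are conventionally defined, for execution on p threads, as

```latex
S_p = \frac{T_1}{T_p}, \qquad E_p = \frac{S_p}{p}
```

where T_1 is the serial execution time and T_p the execution time with p threads.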
Abstract:
Hard metals are composites developed in 1923 by Karl Schröter, with wide application because of their high hardness, wear resistance and toughness. They are composed of a brittle WC phase and a ductile Co phase. The mechanical properties of hard metals depend strongly on the WC-Co microstructure and are additionally affected by the microstructure of the WC powders before sintering. An important feature is that toughness and hardness increase simultaneously as the WC is refined; therefore, the development of nanostructured WC-Co hard metal has been extensively studied. There are many methods of manufacturing WC-Co hard metals, including the spray conversion process, co-precipitation, the displacement reaction process, mechanochemical synthesis and high-energy ball milling. High-energy ball milling is a simple and efficient way of manufacturing fine, nanostructured powder. In this process, the continuous impacts on the powders promote pronounced changes: the brittle phase is refined down to the nanometric scale and incorporated into the ductile matrix, while the ductile phase is deformed, re-welded and hardened. The goal of this work was to investigate the effects of high-energy milling time on the microstructural changes in the WC-Co particulate composite, particularly the refinement of the crystallite size and the lattice strain. The starting powders were WC (average particle size D50 0.87 μm), supplied by Wolfram Bergbau-u. Hütten GMBH, and Co (average particle size D50 0.93 μm), supplied by H.C.Starck. A mixture of 90% WC and 10% Co was milled in a planetary ball mill for 2, 10, 20, 50, 70, 100 and 150 hours, with a ball-to-powder ratio (BPR) of 15:1 at 400 rpm. The starting powders and the milled particulate composite samples were characterized by X-ray diffraction (XRD) and scanning electron microscopy (SEM) to identify phases and morphology. The crystallite size and lattice strain were measured by Rietveld's method, which allowed more precise information to be obtained about the influence of each on the microstructure. The results show that high-energy milling is an efficient manufacturing process for the WC-Co composite and that milling time has a great influence on the microstructure of the final particles, crushing the WC to nanometric order and dispersing it finely in the Co particles.
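Although the work extracts crystallite size and lattice strain from Rietveld refinement, the classical Williamson-Hall relation gives a compact picture of how X-ray peak broadening separates the two contributions:

```latex
\beta \cos\theta = \frac{K\lambda}{D} + 4\,\varepsilon\,\sin\theta
```

where beta is the peak breadth, theta the Bragg angle, lambda the X-ray wavelength, K a shape factor of about 0.9, D the crystallite size and epsilon the lattice strain.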
Abstract:
To obtain process stability and a quality weld bead, an adequate set of parameters (base current and base time, pulse current and pulse time) is necessary, because these influence the metal transfer mode and the weld quality in pulsed MIG (MIG-P), sometimes requiring special power sources with synergic modes and external control to achieve this stability. This work aims to analyze and compare the effects of pulse parameters and droplet size on arc stability in MIG-P. Four sets of pulse parameters were analyzed: Ip = 160 A, tp = 5.7 ms; Ip = 300 A, tp = 2 ms; Ip = 350 A, tp = 1.2 ms; and Ip = 350 A, tp = 0.8 ms. Each set was analyzed with three different droplet diameters: equal to, larger than, and smaller than the diameter of the wire electrode. For comparison purposes, the same relation between average current and welding speed was maintained, generating a constant (Im/Vs = K) for all parameter sets. Bead-on-plate welds were first made with MIG-P at a constant contact-tip-to-workpiece distance (DBCP); subsequently, bead-on-plate welds were made on a plate inclined by 10 degrees so as to vary the DBCP, which made it possible to assess how MIG-P behaves in this situation and also to evaluate MIG-P with adaptive control, intended to keep arc stability constant. High-speed filming synchronized with current and voltage acquisition (oscillograms) was also carried out, for better interpretation of the transfer mechanism and better evaluation of process stability. It is concluded that parameter sets 3 and 4 exhibited greater versatility; droplet diameters equal to or slightly smaller than the wire diameter exhibited better stability owing to their higher detachment frequency, and detachment at the droplet base does not impair maintenance of the arc height.
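For reference, the average current Im that enters the Im/Vs = K relation above is conventionally obtained, for a rectangular pulsed waveform, from the pulse and base levels and their durations:

```latex
I_m = \frac{I_p\,t_p + I_b\,t_b}{t_p + t_b}
```

where Ip and tp are the pulse current and pulse time, and Ib and tb the base current and base time.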