143 results for Axiomatização dos Reais


Relevance:

10.00%

Publisher:

Abstract:

This master's thesis presents a reliability study conducted on onshore oil fields in the Potiguar Basin (RN/CE) operated by Petrobras, Brazil. The main objective was to build a regression model to predict the risk of failures that prevent production wells from functioning properly, using explanatory variables related to the wells, such as the lift method, the amount of water produced in the well (BSW), the gas-oil ratio (GOR), the depth of the production pump, and the operational unit of the oil field, among others. The study was based on a retrospective sample of 603 oil columns drawn from all those operating between 2000 and 2006. Statistical hypothesis tests under a Weibull regression model fitted to the failure data allowed the selection of significant predictors, from the set considered, to explain the time to first failure in the wells.
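
To make the modeling step concrete, the sketch below fits a Weibull regression in accelerated-failure-time form, with right censoring, by maximum likelihood; the covariates, coefficients, and censoring rate are synthetic stand-ins, not the thesis's well data.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(params, t, X, event):
    """Negative log-likelihood of a Weibull AFT model with right censoring."""
    k = np.exp(params[0])            # Weibull shape, kept positive via exp
    lam = np.exp(X @ params[1:])     # scale as a log-linear function of covariates
    z = (t / lam) ** k
    # failed wells contribute the log-density, censored ones the log-survival
    ll = event * (np.log(k) - np.log(lam) + (k - 1) * np.log(t / lam)) - z
    return -ll.sum()

rng = np.random.default_rng(42)
n = 603                                                      # sample size used in the thesis
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])   # intercept + 2 synthetic covariates
t = rng.weibull(1.5, n) * np.exp(X @ np.array([2.0, 0.5, -0.3]))
event = rng.random(n) < 0.8                                  # ~20% right-censored
res = minimize(neg_log_lik, np.zeros(4), args=(t, X, event), method="Nelder-Mead")
print("shape:", np.exp(res.x[0]), "betas:", res.x[1:])
```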

Relevance:

10.00%

Publisher:

Abstract:

The changes brought about in the financial system by the introduction of new technologies and new forms of bank administration have had an impact on workers' health. These changes in the work process generate a combination of risk factors that result in numerous injuries and illnesses among bank employees, notably among bank tellers. Work-Related Musculoskeletal Disorders (WRMD) represent a group of occupational diseases consistently present among these workers. Because of their high incidence and the amount of financial resources involved, managing the problem has been the object of constant study. This paper aims to analyze the bank teller's activity; investigate the occurrence of WRMD in that activity, identifying its determining factors; determine the actual number of keystrokes made by the operator; and propose solutions that help reduce illness in the bank teller's workplace. Methodological tools of ergonomics are used to provide broad knowledge of the aspects of the work that were studied and that influence the generation of these occupational diseases. It was found that the activity exposes workers to serious risk of occupational disease. The main contributing and determining factors for this illness are: the requirement to control the number of daily authentications; an evaluation system based on productivity targets; a management system based on customer-service time; stressful working conditions (liability for till shortages); excessive working hours; workstation furniture with ergonomic inadequacies; and an inefficient policy for the prevention of occupational diseases. Cases were also noted of workers falling ill with WRMD without the legally required issuance of a work accident report and without the employee's removal from the workplace.

Relevance:

10.00%

Publisher:

Abstract:

As a contemporary trend, the theme of climate change, already acknowledged as a concern in international economic and political circles, is also gaining traction in the industrial and business sector. Firms are implementing actions to try to minimize the impact of their own greenhouse gas (GHG) emissions. However, the great majority of these Corporate Social-Environmental Responsibility (CSR) actions refer only to the direct emissions of the main production systems. Direct emissions are those derived from an isolated process, leaving out the emissions of upstream and downstream processes, which account for the majority of the emissions that exist because of the firm's production system. Because the greenhouse effect occurs globally and GHG emissions contribute to climate change regardless of their origin, the whole life cycle of products and systems must be taken into account, from the energy invested in extracting resources and producing the necessary materials through to final disposal. To do so, all relevant steps of a product's or production system's life cycle must be investigated, tracking every activity that emits greenhouse gases, directly or indirectly. This total amount of emissions constitutes the firm's Carbon Footprint. The purpose of this research is to argue for the relevance of the Carbon Footprint and the viability of adopting it as an environmental indicator in the measurement and assessment of CSR. A case study was carried out at Petrobras's headquarters unit in Natal, Brazil, assessing part of its Carbon Footprint. The software GEMIS 4.6 was used to quantify the emissions. The items measured were the direct emissions of the unit's own vehicles and the indirect emissions of the offset paper (A4), energy, and disposable plastic cups consumed. For 2009, these emissions amounted to 3,811.94 tCO2eq. We may conclude that quantifying the Carbon Footprint is indispensable for knowing the real emissions caused by a production process, and should serve as a basis for CSR decisions addressing the challenge of reversing climate change.
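
The underlying bookkeeping is a sum of activity amounts multiplied by emission factors. A minimal sketch for the item categories named above, with invented amounts and factors (the study itself used GEMIS 4.6 factors, which are not reproduced here):

```python
# Hypothetical activity data and emission factors (illustrative values only,
# not the GEMIS 4.6 factors or the amounts measured in the study).
activities = {
    "fleet_diesel_l":  (120_000, 2.68e-3),    # litres burned, tCO2eq per litre
    "electricity_kwh": (1_500_000, 0.5e-3),   # kWh consumed, tCO2eq per kWh (grid mix)
    "a4_paper_kg":     (4_000, 1.3e-3),       # kg of offset paper (A4)
    "plastic_cups_kg": (900, 2.5e-3),         # kg of disposable plastic cups
}

# Footprint = sum over items of (activity amount) x (emission factor)
footprint = sum(amount * factor for amount, factor in activities.values())
print(f"Carbon footprint: {footprint:.2f} tCO2eq")
```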

Relevance:

10.00%

Publisher:

Abstract:

This work seeks to offer a model to improve spare parts inventory management for urban bus passenger transport companies, with a consequent improvement in their maintenance management. Also known as MRO items (Maintenance, Repair and Operations), these spare parts, given their consumption and demand characteristics, cost, criticality to the operation, lead time, and number of suppliers, among other parameters, should not have their inventory managed like normal production items (work in process and finished products), which, because of their characteristics, are managed by more predictable models based, for example, on the economic order quantity. In urban bus passenger transport companies specifically, MRO items represent a significant share of assets, and poor management of these inventories can cause serious losses, even driving the business to bankruptcy in the most severe situations, in which a missing spare part keeps vehicles out of service indefinitely. Given the slight attention paid to the issue, reflected in the scant literature available compared with that on normal inventory items, and given that MRO items are critical to urban bus passenger transport companies, it is necessary to go deeper into this theme, seeking to give technical and scientific support to companies that often deal empirically with inputs so decisive to their business. As a typical portfolio problem, in which n items, separated into critical and non-critical, compete for the same resource, a new algorithm was developed to aid inventory management of spare parts used only in corrective maintenance (whose failures are unpredictable and random), by analyzing the cost-benefit ratio, which weighs the service level against the cost of each item. The model was tested in an urban bus passenger transport company from the city of Natal, which anonymously provided its real data for use in this work.
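
One way to read the cost-benefit idea is as a greedy marginal-analysis loop: repeatedly buy the unit of stock that yields the largest service-level gain per unit of cost. The sketch below assumes Poisson lead-time demand and invented item data; it is a generic illustration, not the algorithm developed in the work.

```python
import math

def poisson_cdf(s, lam):
    """P(lead-time demand <= s) for Poisson demand with mean lam."""
    return sum(math.exp(-lam) * lam**k / math.factorial(k) for k in range(s + 1))

# Hypothetical items: name -> (unit cost, mean lead-time demand, critical?)
items = {"injector": (800.0, 2.0, True),
         "belt":     (45.0, 5.0, False),
         "sensor":   (300.0, 1.2, True)}
budget = 5000.0
stock = {name: 0 for name in items}

while True:
    best, best_ratio = None, 0.0
    for name, (cost, lam, critical) in items.items():
        if cost > budget:
            continue
        # marginal fill-rate gain of one more unit on the shelf
        gain = poisson_cdf(stock[name] + 1, lam) - poisson_cdf(stock[name], lam)
        if critical:
            gain *= 2.0                      # weight critical items more heavily
        if gain / cost > best_ratio:
            best, best_ratio = name, gain / cost
    if best is None or best_ratio < 1e-6:    # budget exhausted or gains negligible
        break
    stock[best] += 1
    budget -= items[best][0]

print(stock)
```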

Relevance:

10.00%

Publisher:

Abstract:

Advances in the Internet and telecommunications have been changing the concepts of Information Technology (IT), especially with regard to outsourced services, through which organizations seek cost-cutting and a sharper focus on their core business. Along with the development of such outsourcing, a new model named Cloud Computing (CC) evolved, proposing to migrate both data processing and information storage to the Internet. Key aspects of Cloud Computing include cost-cutting, benefits, risks, and shifts in IT paradigms. Nonetheless, adopting this model creates difficulties for decision-making by IT managers, mainly regarding which solutions may go to the cloud and which service providers best suit the organization's reality. The overall aim of this research is to apply the AHP (Analytic Hierarchy Process) method to decision-making in Cloud Computing. The methodology was exploratory, with a case study applied to a nationwide organization (the Federation of Industries of RN). Data collection was performed through two structured questionnaires answered electronically by IT technicians and by the company's Board of Directors. The data analysis was qualitative and comparative, using the AHP software Web-Hipre. The results confirmed the importance of applying the AHP method to decisions on adopting Cloud Computing, mainly because, at the time the research was carried out, the company already showed interest in and need for adopting CC, given the internal problems with infrastructure and availability of information it faced. The organization sought to adopt CC but had doubts about which cloud model and which service provider would best meet its real needs. The application of AHP thus worked as a guiding tool for choosing the best alternative, which points to the Hybrid Cloud as the ideal choice for starting out in Cloud Computing, under the following arrangement: the Infrastructure as a Service (IaaS) layer (processing and storage) should stay partly in the Public Cloud and partly in the Private Cloud; the Platform as a Service (PaaS) layer (software development and testing) showed a preference for the Private Cloud; and the Software as a Service (SaaS) layer was split, with e-mail going to the Public Cloud and applications to the Private Cloud. The research also identified the important factors in hiring a Cloud Computing provider.
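
The computational core of AHP is short: priorities are the normalized principal eigenvector of a pairwise comparison matrix, checked by a consistency ratio. A sketch with a hypothetical 3x3 criteria matrix, not the judgments actually elicited in the study:

```python
import numpy as np

# Hypothetical pairwise comparisons (Saaty 1-9 scale) of three criteria:
# cost, security, availability. A[i, j] = importance of i relative to j.
A = np.array([[1.0,  3.0, 0.5 ],
              [1/3., 1.0, 0.25],
              [2.0,  4.0, 1.0 ]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                   # principal eigenvalue
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                                  # priority vector (sums to 1)

CI = (eigvals.real[k] - len(A)) / (len(A) - 1)   # consistency index
CR = CI / 0.58                                   # random index RI = 0.58 for n = 3
print("weights:", w.round(3), "consistency ratio:", round(CR, 3))  # CR < 0.1 is acceptable
```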

Relevance:

10.00%

Publisher:

Abstract:

The present study aims to analyze, at different levels of demand, the best layout strategy to adopt for small metallic shipbuilding. To achieve this purpose, three simulation models were developed to analyze production strategies under positional, cellular, and linear layouts. To compare the scenarios with a simulation tool, the methodologies of Chwif and Medina (2010) and Law (2009) were adapted into three phases: conception, implementation, and analysis. In the conception phase, the real systems were represented by process mapping according to the time, material resources, and human resources required for each step of the production process; all of this information was then converted into a cost variable. Data were collected from three different production systems: two located in Natal (RN), with cellular and positional layouts, and one located in Belém (PA), with a linear layout. In the implementation phase, the conceptual models were converted into computational models with the tool Rockwell Software Arena® 13.5 and then validated. In the analysis phase, an annual production of up to 960 vessels was simulated for each layout, showing that for a production of up to 80 units the positional layout is the most recommended; between 81 and 288 units, the cellular layout; and above 288 units, the linear layout.
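
The reported thresholds can be stated directly as a decision rule over annual demand:

```python
def recommended_layout(units_per_year: int) -> str:
    """Layout recommendation following the thresholds reported by the simulation study."""
    if units_per_year <= 80:
        return "positional"
    if units_per_year <= 288:
        return "cellular"
    return "linear"

print(recommended_layout(150))   # -> "cellular"
```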

Relevance:

10.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior

Relevance:

10.00%

Publisher:

Abstract:

The present research aims to contribute to the area of fault detection and diagnosis by proposing a new architecture for a Fault Detection and Isolation (FDI) system. The proposed architecture introduces innovations in the way the monitored physical quantities are linked to the FDI system and, as a consequence, in the way faults are detected, isolated, and classified. A search for mathematical tools able to satisfy the objectives of the proposed architecture pointed to the Kalman Filter and its derivatives, the EKF (Extended Kalman Filter) and the UKF (Unscented Kalman Filter). The first is efficient when the monitored process presents a linear relation between the monitored physical quantities and its output; the other two handle the case where this dynamics is nonlinear. A short comparison of their features and abilities in the context of fault detection concludes that the UKF is a better alternative than the EKF for composing the proposed FDI architecture when the process dynamics is nonlinear. The results presented at the end of the research cover both linear and nonlinear industrial processes, and the efficiency of the proposed architecture can be observed in its application to simulated and real processes. The contributions of this thesis are summarized at the end of the text.
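
The residual-based detection principle shared by the three filters can be illustrated with the plain linear Kalman filter on a scalar toy plant; the model, noise levels, fault magnitude, and threshold below are illustrative assumptions, not the processes studied in the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)
a, q, r = 0.95, 0.01, 0.04          # toy scalar plant: x[k+1] = a*x[k] + w,  y = x + v
x_true, x_hat, P = 0.0, 0.0, 1.0

for k in range(200):
    x_true = a * x_true + rng.normal(0, np.sqrt(q))
    bias = 1.5 if k >= 120 else 0.0              # additive sensor fault injected at k = 120
    y = x_true + bias + rng.normal(0, np.sqrt(r))

    x_hat, P = a * x_hat, a * P * a + q          # predict
    nu, S = y - x_hat, P + r                     # innovation (residual) and its variance
    if abs(nu) / np.sqrt(S) > 4.0:               # threshold test on the normalized residual
        print(f"fault flagged at k={k}, normalized residual {abs(nu)/np.sqrt(S):.2f}")
        break
    K = P / S                                    # update
    x_hat, P = x_hat + K * nu, (1 - K) * P
```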

Relevance:

10.00%

Publisher:

Abstract:

Predictive control has received plenty of attention in recent decades, because the need to understand, analyze, predict, and control real systems has grown quickly with technological and industrial progress. The objective of this thesis is to contribute to the development and implementation of nonlinear predictive controllers based on the Hammerstein model, and to evaluate their properties. In the development of the nonlinear predictive controller, the time-step linearization method is used, and a compensation term is introduced to improve controller performance. The main motivation of this thesis is the study and guarantee of stability for the nonlinear predictive controller based on the Hammerstein model, using the concept of sectors and the Popov theorem. Simulation results with models from the literature show that the proposed approaches achieve good control performance and guarantee system stability.
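
A Hammerstein model is a static nonlinearity feeding a linear block, and a predictive controller minimizes a tracking cost over a horizon. The sketch below uses brute-force enumeration of a constant input instead of the thesis's time-step linearization, on an invented first-order model:

```python
import numpy as np

def f_static(u):
    """Illustrative static nonlinearity of the Hammerstein model (not the thesis's)."""
    return np.tanh(u)

def simulate(x, u_seq, a=0.8, b=0.5):
    """Hammerstein structure: static nonlinearity followed by linear dynamics."""
    xs = []
    for u in u_seq:
        x = a * x + b * f_static(u)
        xs.append(x)
    return np.array(xs)

def mpc_step(x, r, horizon=5, candidates=np.linspace(-2, 2, 41)):
    """Naive receding-horizon step: try constant inputs, keep the cheapest."""
    costs = [np.sum((simulate(x, [u] * horizon) - r) ** 2) + 0.01 * u**2
             for u in candidates]
    return candidates[int(np.argmin(costs))]

x, r = 0.0, 0.6                       # state and reference
for k in range(30):
    u = mpc_step(x, r)                # optimize, apply first move, repeat
    x = 0.8 * x + 0.5 * f_static(u)
print(f"final state {x:.3f} vs reference {r}")
```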

Relevance:

10.00%

Publisher:

Abstract:

In industrial informatics, several attempts have been made to develop notations and semantics for classifying and describing different kinds of system behavior, particularly in the modeling phase. Such attempts provide the infrastructure to solve real engineering problems and to build practical systems aimed mainly at increasing the productivity, quality, and safety of the process. Despite the many studies that have attempted to develop friendly methods for programming industrial controllers, they are still programmed by conventional trial-and-error methods and, in practice, there is little written documentation on these systems. The ideal solution would be a computational environment that allows industrial engineers to implement the system using a high-level language that follows international standards. Accordingly, this work proposes a methodology for plant and control modeling of discrete event systems that include sequential, parallel, and timed operations, using a formalism based on Statecharts, denominated Basic Statechart (BSC). The methodology also provides automatic procedures to validate and implement these systems. To validate the methodology, two case studies with typical examples from the manufacturing sector are presented. The first example shows sequential control of a tagging machine, used to illustrate dependences between the devices of the plant. The second example discusses more than one strategy for controlling a manufacturing cell: the model with no control has 72 states (distinct configurations); the model with sequential control generates 20 different states, acting in only 8 distinct configurations; and the model with parallel control generates 210 different states, acting in only 26 distinct configurations, and is therefore a less restrictive control strategy than the previous one. Lastly, an example is presented to highlight the modular characteristic of the methodology, which is very important for the maintenance of applications. In this example, the sensors that identify pieces in the plant were removed, so changes in the control model were needed to transmit the information from the input buffer sensor to the other positions of the cell.
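
The flavor of such control models can be conveyed by a flat transition table, although BSC itself adds hierarchy, parallelism, and timed transitions; the states and events below are hypothetical:

```python
# Minimal flat state-machine sketch of a sequential control (illustrative only;
# the thesis uses the richer Basic Statechart formalism).
transitions = {
    ("idle",      "piece_detected"): "clamping",
    ("clamping",  "clamped"):        "tagging",
    ("tagging",   "done"):           "releasing",
    ("releasing", "released"):       "idle",
}

def step(state, event):
    # events with no transition from the current state are ignored
    return transitions.get((state, event), state)

state = "idle"
for ev in ["piece_detected", "clamped", "done", "released"]:
    state = step(state, ev)
    print(ev, "->", state)
```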

Relevance:

10.00%

Publisher:

Abstract:

In this work we use interval mathematics to establish interval counterparts for the main tools used in digital signal processing. More specifically, the approach developed here covers signals, systems, sampling, quantization, coding, and Fourier transforms. A detailed study is provided of some interval arithmetics that handle complex numbers: rectangular complex interval arithmetic, circular complex arithmetic, and interval arithmetic for polar sectors. This leads us to investigate properties relevant to the development of a theory of interval digital signal processing. It is shown that the sets IR and R(C) endowed with any correct arithmetic do not form an algebraic field, meaning that those sets do not behave like the real and complex numbers. An alternative to the notion of interval complex width is also provided, and the Kulisch-Miranker order is used to write complex numbers in interval form, enabling operations on endpoints. The use of interval signals and systems is made possible by the representation of values in floating-point systems: if a number x ∈ R is not representable in a floating-point system F, it is mapped to an interval [x1, x2] such that x1 is the largest number in F smaller than x and x2 is the smallest number in F greater than x. This interval representation is the starting point for the definition of interval signals and systems taking real or complex values, and it provides the extension of notions such as causality, stability, time invariance, homogeneity, additivity, and linearity to interval systems. The process of quantization is extended to its interval counterpart, and interval versions of quantization levels, quantization error, and the encoded signal are then provided. It is shown that the interval quantization levels represent complex quantization levels and that the classical quantization error ranges over the interval quantization error. An estimate for the interval quantization error and an interval version of the Z-transform (and hence of the Fourier transform) are provided. Finally, the results of a Matlab implementation are given.
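
The floating-point enclosure described above is easy to state in code. The sketch below implements a bare-bones real interval type and the enclosure of a number between neighboring floats (here via one ulp on each side of the rounded value, a conservative reading of the definition):

```python
import math  # math.nextafter requires Python 3.9+

class Interval:
    """Closed real interval [lo, hi] with the standard interval operations."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)
    def __mul__(self, o):
        p = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
        return Interval(min(p), max(p))
    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

def to_interval(x):
    """Enclose x between its neighboring floats: a conservative version of
    mapping a non-representable real to [x1, x2] as in the text."""
    return Interval(math.nextafter(x, -math.inf), math.nextafter(x, math.inf))

x = to_interval(0.1)   # 0.1 has no exact binary floating-point representation
print(x, x + x, x * x)
```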

Relevance:

10.00%

Publisher:

Abstract:

Most state estimation algorithms based on the classical model are adequate only for transmission networks. Few algorithms have been developed specifically for distribution systems, probably because of the small amount of data available in real time: most overhead feeders have only current and voltage measurements at the medium-voltage busbar of the substation. Classical algorithms are therefore difficult to implement, even when off-line data are used as pseudo-measurements. However, the need to automate the operation of distribution networks, mainly with regard to the selectivity of protection systems and to load transfer maneuvers, is changing network planning policy. Equipment incorporating telemetry and command modules has been installed to improve operational features, increasing the amount of measurement data available in real time at the System Operation Center (SOC). This encourages the development of a state estimator model involving real-time information and load pseudo-measurements built from the typical power factors and utilization (demand) factors of distribution transformers. This work reports the development of a new state estimation method specific to radial distribution systems. The main algorithm of the method is based on the power summation load flow. The estimation is carried out piecewise, section by section of the feeder, going from the substation to the terminal nodes. For each section, a measurement model is built, resulting in an overdetermined set of nonlinear equations whose solution is obtained through the Gaussian normal equations; the estimated variables of one section are then used as pseudo-measurements for the next. In general, the measurement set for a generic section consists of pseudo-measurements of power flows and nodal voltages obtained from the previous section (or real-time measurements, where they exist), plus pseudo-measurements of injected powers for the power summations, whose functions are the load-flow equations, assuming the network can be represented by its single-phase equivalent. The great advantages of the algorithm are its simplicity, its low computational effort, and its efficiency with regard to the accuracy of the estimated values. Besides the power summation state estimator, this work shows how other algorithms can be adapted to provide state estimation for medium-voltage substations and networks, namely Schweppe's method and an algorithm based on current proportionality that is usually adopted for network planning tasks. Both estimators were implemented not only as alternatives to the proposed method but also to produce results that support its validation. Since in most cases no power measurement is available at the beginning of the feeder, which the power summation estimation method requires, a new algorithm was also developed for estimating the network variables at the medium-voltage busbar.
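
The section-wise solution via the normal equations is, at its core, a Gauss-Newton weighted least-squares step. A generic sketch, with a toy measurement model standing in for the feeder equations:

```python
import numpy as np

def wls_estimate(h, H_jac, z, W, x0, iters=10):
    """Gauss-Newton solution of the normal equations for an overdetermined
    nonlinear measurement model z = h(x) + e (generic sketch, not the
    thesis's section-by-section feeder formulation)."""
    x = x0.astype(float)
    for _ in range(iters):
        r = z - h(x)                                   # measurement residual
        H = H_jac(x)                                   # Jacobian of h at x
        dx = np.linalg.solve(H.T @ W @ H, H.T @ W @ r) # normal equations
        x = x + dx
        if np.linalg.norm(dx) < 1e-8:
            break
    return x

# Toy model: one state, two measurements z = [x, x^2] with different accuracies.
h     = lambda x: np.array([x[0], x[0] ** 2])
H_jac = lambda x: np.array([[1.0], [2 * x[0]]])
z = np.array([2.05, 4.2])
W = np.diag([1 / 0.01, 1 / 0.04])                      # inverse measurement variances
print(wls_estimate(h, H_jac, z, W, np.array([1.0])))
```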

Relevance:

10.00%

Publisher:

Abstract:

This master's dissertation presents the development of a fault detection and isolation system based on neural networks. The system is composed of two parts: an identification subsystem and a classification subsystem. Both subsystems use neural network techniques based on the multilayer perceptron. Two approaches for the identification stage were analyzed. The fault classifier uses only the residue signals from the identification subsystem. To validate the proposal, simulations and real experiments were carried out on a level system with two water reservoirs. Several faults were generated in this plant, and the proposed fault detection system showed very acceptable behavior. At the end of this work, we highlight the main difficulties found in the real tests that do not arise when working only in simulation environments.
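
A classifier over residue signals can be caricatured with a fault-signature table; the thesis uses a trained multilayer perceptron instead, so the signatures and thresholds below are purely illustrative:

```python
import numpy as np

# Hypothetical fault signatures: which residues each fault excites
# (1 = residue fires, 0 = residue stays near zero).
SIGNATURES = {
    "leak_tank_1":   np.array([1, 0]),
    "leak_tank_2":   np.array([0, 1]),
    "pump_degraded": np.array([1, 1]),
}

def classify(residuals, threshold=0.1):
    """Map a residue vector from the identification stage to a fault label."""
    fired = (np.abs(residuals) > threshold).astype(int)
    if not fired.any():
        return "no fault"
    for fault, sig in SIGNATURES.items():
        if np.array_equal(fired, sig):
            return fault
    return "unknown fault"

print(classify(np.array([0.25, 0.02])))   # -> "leak_tank_1"
```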

Relevance:

10.00%

Publisher:

Abstract:

Breast cancer, despite being one of the leading causes of death among women worldwide, is a disease that can be cured if diagnosed early. One of the main techniques used in the detection of breast cancer is Fine Needle Aspiration (FNA), which, depending on the clinical case, requires analysis by several medical specialists to develop a diagnosis. However, such diagnoses and second opinions have been hampered by the geographical dispersion of physicians and/or the difficulty of reconciling schedules to work together. Within this reality, this PhD thesis uses computational intelligence to support medical decision-making in remote diagnosis. For that purpose, it presents a fuzzy method to assist the diagnosis of breast cancer, able to process and classify data extracted from breast tissue obtained by FNA. This method is integrated into a virtual environment for collaborative remote diagnosis, whose model was developed to allow the incorporation of pre-diagnosis modules to support medical decision-making. In the development of the fuzzy method, knowledge acquisition was carried out through the extraction and analysis of numerical data from a gold-standard database and through interviews and discussions with medical experts. The method was tested and validated with real cases and, given the sensitivity and specificity achieved (correct diagnosis of malignant and benign tumors, respectively), the results were satisfactory, considering the opinions of the doctors, the quality standards for breast cancer diagnosis, and comparisons with other studies involving breast cancer diagnosis by FNA.
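
The kind of fuzzy inference involved can be sketched with triangular memberships and min/max rules; the features, ranges, and rules below are invented for illustration and are not the ones elicited from the medical experts:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def diagnose(cell_radius, texture):
    """Toy fuzzy classifier over two hypothetical FNA features."""
    small, large  = tri(cell_radius, 0, 10, 18), tri(cell_radius, 12, 25, 40)
    smooth, coarse = tri(texture, 0, 8, 16),     tri(texture, 10, 22, 35)
    benign = min(small, smooth)            # rule 1: small AND smooth -> benign
    malignant = max(min(large, coarse),    # rule 2: large AND coarse -> malignant
                    large)                 # rule 3: large alone is suspicious
    return "malignant" if malignant > benign else "benign"

print(diagnose(cell_radius=22, texture=24))   # -> "malignant"
```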

Relevance:

10.00%

Publisher:

Abstract:

This work introduces a new method for mapping environments with three-dimensional information extracted from visual data, aimed at accurate robotic navigation. Many 3D mapping approaches based on occupancy grids typically require high computational effort both to build and to store the map. We introduce a 2.5-D occupancy-elevation grid mapping, a discrete approach in which each cell stores its occupancy probability, the height of the terrain at the corresponding place in the environment, and the variance of that height. This 2.5-dimensional representation allows a mobile robot to know whether a place in the environment is occupied by an obstacle and the height of that obstacle, so it can decide whether the obstacle can be traversed. The sensory information needed to construct the map is provided by a stereo vision system, modeled with a robust probabilistic approach that accounts for the noise present in stereo processing. The resulting maps favor the execution of tasks such as decision-making in autonomous navigation, exploration, localization, and path planning. Experiments carried out with a real mobile robot demonstrate that the proposed approach yields useful maps for autonomous robot navigation.
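
A cell of such a map needs only three numbers, and its update combines a log-odds occupancy step with a one-dimensional Kalman-style fusion of the height reading. A minimal sketch under assumed sensor-model parameters:

```python
import math

class Cell:
    """One 2.5-D grid cell: occupancy probability plus height mean and variance."""
    def __init__(self):
        self.log_odds = 0.0     # occupancy stored in log-odds form (p = 0.5)
        self.height = 0.0       # estimated terrain/obstacle height (m)
        self.var = 1.0          # variance of the height estimate

    def update(self, hit, z, z_var, l_occ=0.85, l_free=-0.4):
        # occupancy: add the sensor model's log-odds for a hit or a miss
        self.log_odds += l_occ if hit else l_free
        # height: 1-D Kalman-style fusion of the new stereo height reading
        k = self.var / (self.var + z_var)
        self.height += k * (z - self.height)
        self.var *= (1 - k)

    @property
    def p_occupied(self):
        return 1.0 - 1.0 / (1.0 + math.exp(self.log_odds))

cell = Cell()
cell.update(hit=True, z=0.42, z_var=0.05)   # stereo reading with its noise variance
print(round(cell.p_occupied, 2), round(cell.height, 2), round(cell.var, 3))
```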