80 results for Duarte, Adriane Da Silva


Relevance: 20.00%

Abstract:

Several lines of research show that sleep favors memory consolidation and learning. It has been proposed that the cognitive role of sleep derives from a global scaling of synaptic weights, able to homeostatically restore the ability to learn new things, erasing memories overnight. This phenomenon is typical of slow-wave sleep (SWS) and characterized by non-Hebbian mechanisms, i.e., mechanisms independent of synchronous neuronal activity. Another view holds that sleep also triggers the specific enhancement of synaptic connections, embossing certain mnemonic traces onto a lattice of synaptic weights that is rescaled each night. Such embossing is understood as the combination of Hebbian and non-Hebbian mechanisms, capable of respectively increasing and decreasing the synaptic weights in complementary circuits, leading to selective memory improvement and a restructuring of the synaptic configuration (SC) that can be crucial for the generation of new behaviors (insights). Empirical findings indicate that Hebbian plasticity during sleep is initiated at the transition from SWS to the rapid eye movement (REM) stage, possibly due to the significant differences between the firing-rate regimes of the two stages and the up-regulation of factors involved in long-term synaptic plasticity. In this study, the homeostasis and embossing theories were compared using an artificial neural network (ANN) fed with action potentials recorded in the hippocampus of rats during the sleep-wake cycle. In the simulation in which the ANN did not apply long-term plasticity mechanisms during sleep (the SWS-REM transition), the synaptic weight distribution was inexorably rescaled, its mean value proportional to the input firing rate, erasing the synaptic weight pattern that had been established initially.
In contrast, when long-term plasticity was modeled during the SWS-REM transition, an increase of synaptic weights was observed in the range of initial/low values, effectively redistributing the weights so as to reinforce a subset of synapses over time. The results suggest that positive regulation arising from long-term plasticity can completely change the role of sleep: its absence leads to forgetting; its presence leads to a positive mnemonic change.
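As a rough illustration of the two mechanisms contrasted above (not the thesis's actual ANN, which is fed with recorded hippocampal spikes), a minimal numpy sketch of global non-Hebbian downscaling versus Hebbian embossing of a weight vector might look like this; all names and parameter values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.uniform(0.1, 1.0, size=100)      # initial synaptic weights

def downscale(w, factor=0.8):
    """Non-Hebbian homeostatic rescaling (SWS-like): a global multiplicative
    cut that shrinks every synapse, erasing absolute weight structure."""
    return w * factor

def potentiate(w, correlated, lr=0.2):
    """Hebbian long-term potentiation (SWS-to-REM-like): selectively and
    saturatingly strengthen synapses with correlated pre/post activity."""
    w = w.copy()
    w[correlated] += lr * (1.0 - w[correlated])
    return w

correlated = rng.random(100) < 0.2             # ~20% of synapses co-fire
w_homeostasis = downscale(weights)             # rescaling alone
w_embossing = potentiate(downscale(weights), correlated)
```

Under the embossing view, the correlated subset ends up reinforced above the rescaled background; under pure homeostasis, every weight simply shrinks.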

Relevance: 20.00%

Abstract:

Ensuring dependability requirements is essential for industrial applications, since faults may cause failures whose consequences result in economic losses, environmental damage or harm to people. Given the relevance of the topic, this thesis proposes a methodology for the dependability evaluation of industrial wireless networks (WirelessHART, ISA100.11a, WIA-PA) at the early design phase; the proposal can also be easily adapted to the maintenance and expansion stages of a network. It uses graph theory and the fault tree formalism to automatically create, from a given industrial wireless network topology, an analytical model on which dependability can be evaluated. The supported evaluation metrics are reliability, availability, MTTF (mean time to failure), importance measures of devices, redundancy aspects and common cause failures. It must be emphasized that the proposal is independent of any particular tool for quantitatively evaluating the target metrics; however, for validation purposes a tool widely accepted in academia (SHARPE) was used. In addition, an algorithm to generate the minimal cut sets, originally applied in graph theory, was adapted to the fault tree formalism to guarantee the scalability of the methodology in industrial wireless network environments (< 100 devices). Finally, the proposed methodology was validated on typical scenarios found in industrial environments, such as star, line, cluster and mesh topologies. Scenarios with common cause failures were also evaluated, along with best practices to guide the design of an industrial wireless network. To guarantee scalability requirements, the performance of the methodology was analyzed in different scenarios, and the results show the applicability of the proposal to networks typically found in industrial environments.
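The core computation such a methodology relies on, evaluating system reliability from minimal cut sets under independent failures, can be sketched as follows. This is an illustrative brute-force evaluator (exact only for small device counts), not the thesis's scalable algorithm or SHARPE:

```python
from itertools import product

def system_reliability(rel, min_cuts):
    """Exact reliability by state enumeration: the system is failed iff
    every component of some minimal cut set is down.  `rel` maps each
    component to its reliability; `min_cuts` is a list of sets."""
    comps = sorted(rel)
    r_sys = 0.0
    for states in product((True, False), repeat=len(comps)):
        up = dict(zip(comps, states))
        p = 1.0
        for c in comps:
            p *= rel[c] if up[c] else 1.0 - rel[c]
        if not any(all(not up[c] for c in cut) for cut in min_cuts):
            r_sys += p            # system survives this state
    return r_sys

# Two devices in series: either one failing brings the network down.
r_series = system_reliability({"a": 0.9, "b": 0.9}, [{"a"}, {"b"}])
# The same two devices redundant: only their joint failure is a cut.
r_parallel = system_reliability({"a": 0.9, "b": 0.9}, [{"a", "b"}])
```

The contrast between the two calls shows why redundancy aspects appear among the metrics: the same devices yield very different system reliability depending on the cut-set structure.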

Relevance: 20.00%

Abstract:

During a petroleum well's production process, simultaneous oil and water production is common, in proportions that can vary from 0% up to values close to 100% water. Moreover, production flows can vary widely, depending on the characteristics of each reservoir. Thus, the meters used in the field for flow and BSW (basic sediment and water) measurement must work well over wide operating ranges. To evaluate the operation of these meters under different operating conditions, a laboratory will be built at UFRN with the objective of automatically evaluating the petroleum flow and BSW measurement processes. The good performance of these meters is fundamental for the accuracy of the measured volumes of liquid and crude petroleum production. For the measurement of this production, petroleum companies use meters that should indicate values with the greatest possible accuracy and respect a series of conditions and minimum requirements established by the joint ANP/INMETRO ordinance 19106/2000. The Flow and BSW Measurement Process Evaluation Laboratory to be built will basically comprise an oil tank, a water tank, a mixer, an auditor tank, a separation tank and a residue tank for fluid discard, all fundamental for the evaluation of flow and BSW meters. The whole process will be automated through a Programmable Logic Controller (PLC) and a supervisory system. Besides allowing the evaluation of the flow and BSW meters used by petroleum companies, the laboratory will make possible the development of research related to automation. It will also contribute to the development of the Computer Engineering and Automation Department, fostering the growth of faculty and students and qualifying them for a continuously growing job market.
The present work describes the automation project of the laboratory to be built at UFRN. The system will be automated using a Programmable Logic Controller and a supervisory system; the PLC programming and the supervisory system screens were developed in this work.
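For reference, BSW itself is just the water fraction of the produced liquid; a trivial sketch of the quantity that the laboratory's reference tanks allow to be cross-checked (function name hypothetical):

```python
def bsw_percent(volume_water, volume_oil):
    """BSW: water (and sediment) volume as a percentage of total liquid."""
    total = volume_water + volume_oil
    if total <= 0:
        raise ValueError("no liquid measured")
    return 100.0 * volume_water / total
```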

Relevance: 20.00%

Abstract:

This work proposes the development of a Computer System for Analysis of Mammograms (SCAM) that aids the specialist physician in identifying and analyzing lesions present in digital mammograms. The computer system for digital mammogram processing makes use of a set of Digital Image Processing (DIP) techniques, with the purpose of helping the medical professional extract the information contained in the mammogram. The system has a user-friendly interface that, starting from the supplied mammogram, offers a set of processing operations, such as image enhancement through filtering techniques, segmentation of mammogram regions, calculation of lesion areas, lesion thresholding, and other tools important for the medical professional's diagnosis. The Wavelet Transform is integrated into the system with the objective of allowing a multiresolution analysis, thus supplying a method for identifying and analyzing microcalcifications.
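The multiresolution step can be illustrated with a single level of an orthonormal 2D Haar decomposition in plain numpy (the abstract does not say which wavelet family the system uses, so Haar is an assumption for illustration); small bright spots such as microcalcifications stand out mainly in the detail subbands:

```python
import numpy as np

def haar2d(img):
    """One level of an orthonormal 2D Haar wavelet decomposition.
    Returns the approximation (LL) and detail (LH, HL, HH) subbands;
    image dimensions must be even."""
    img = np.asarray(img, dtype=float)
    # transform rows: pairwise averages and differences
    a = (img[:, 0::2] + img[:, 1::2]) / np.sqrt(2)
    d = (img[:, 0::2] - img[:, 1::2]) / np.sqrt(2)
    # transform columns of each intermediate band
    ll = (a[0::2, :] + a[1::2, :]) / np.sqrt(2)
    lh = (a[0::2, :] - a[1::2, :]) / np.sqrt(2)
    hl = (d[0::2, :] + d[1::2, :]) / np.sqrt(2)
    hh = (d[0::2, :] - d[1::2, :]) / np.sqrt(2)
    return ll, lh, hl, hh
```

Because the transform is orthonormal, total energy is preserved across the four subbands, which makes subband energies a convenient basis for detection features.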

Relevance: 20.00%

Abstract:

Blind Source Separation (BSS) refers to the problem of estimating original signals from observed linear mixtures with no knowledge about the sources or the mixing process. Independent Component Analysis (ICA) is a technique mainly applied to the BSS problem, and among the algorithms that implement it, FastICA is a high-performance iterative algorithm of low computational cost that uses nongaussianity measures based on higher-order statistics to estimate the original sources. The great number of applications where ICA has been found useful reflects the need for hardware implementations of this technique, and the natural parallelism of FastICA favors its implementation on digital hardware. This work proposes the implementation of FastICA on a reconfigurable hardware platform to assess the viability of its use in blind source separation problems, more specifically in a hardware prototype embedded on a Field Programmable Gate Array (FPGA) board for the monitoring of beds in hospital environments. The implementations will be carried out through Simulink models and synthesized with the DSP Builder software from Altera Corporation.
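The fixed-point iteration at the heart of FastICA is compact enough to sketch in numpy; this floating-point model (one-unit update with the general-purpose tanh nonlinearity) is the kind of software reference against which a Simulink/DSP Builder datapath would be checked, and is an assumption for illustration, not the thesis's actual model:

```python
import numpy as np

def whiten(X):
    """Center and whiten mixtures X (rows = channels, cols = samples)."""
    X = X - X.mean(axis=1, keepdims=True)
    d, E = np.linalg.eigh(X @ X.T / X.shape[1])
    return (E @ np.diag(d ** -0.5) @ E.T) @ X

def fastica_one_unit(Z, iters=200, seed=0):
    """One-unit FastICA on whitened data Z with g = tanh:
    w <- E[z * g(w'z)] - E[g'(w'z)] * w, then renormalize."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(Z.shape[0])
    w /= np.linalg.norm(w)
    for _ in range(iters):
        wz = w @ Z
        g, gp = np.tanh(wz), 1.0 - np.tanh(wz) ** 2
        w_new = (Z * g).mean(axis=1) - gp.mean() * w   # fixed-point step
        w_new /= np.linalg.norm(w_new)
        if abs(abs(w_new @ w) - 1.0) < 1e-12:          # converged (up to sign)
            return w_new
        w = w_new
    return w
```

The few operations involved (dot products, tanh, a norm) map naturally onto parallel hardware blocks, which is the parallelism the abstract refers to.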

Relevance: 20.00%

Abstract:

This study shows the implementation and embedding of an Artificial Neural Network (ANN) in hardware, i.e., in a programmable device such as a field programmable gate array (FPGA). The work explored different implementations, described in VHDL, of multilayer perceptron ANNs. Due to the parallelism inherent to ANNs, software implementations suffer from the sequential nature of Von Neumann architectures; a hardware implementation, in contrast, can exploit all the parallelism implicit in this model. There is currently growing use of FPGAs as a platform for implementing neural networks in hardware, exploiting their high processing power, low cost, ease of programming and circuit reconfigurability, which allows the network to adapt to different applications. In this context, the aim is to develop neural network arrays in hardware with a flexible architecture, in which it is possible to add or remove neurons and, mainly, to modify the network topology, enabling a modular fixed-point network on an FPGA. Five VHDL descriptions were synthesized: two for a neuron with one or two inputs, and three for different ANN architectures. The descriptions of the architectures are highly modular, easily allowing the number of neurons to be increased or decreased. As a result, several complete neural networks were implemented on an FPGA, in fixed-point arithmetic, with high-capacity parallel processing.
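A minimal software model of the kind of fixed-point neuron such an architecture computes may help fix ideas; the Q8.8 format and the hardware-friendly piecewise-linear "hard sigmoid" are assumptions for illustration, not the thesis's exact datapath:

```python
import numpy as np

FRAC_BITS = 8              # Q8.8 format: 8 fractional bits
SCALE = 1 << FRAC_BITS

def to_fixed(x):
    """Quantize real values to Q8.8 integers."""
    return np.round(np.asarray(x) * SCALE).astype(np.int32)

def fixed_neuron(inputs_q, weights_q, bias_q):
    """Perceptron neuron in fixed point: integer multiply-accumulate,
    shift back to Q8.8, then a clamped piecewise-linear activation
    (hard sigmoid: clamp(0.25*x + 0.5, 0, 1)) done with shifts only."""
    acc = int(np.dot(inputs_q.astype(np.int64), weights_q.astype(np.int64)))
    acc += int(bias_q) * SCALE        # align Q8.8 bias with Q16.16 products
    acc >>= FRAC_BITS                 # back to Q8.8 after the multiplies
    y = (acc >> 2) + (SCALE >> 1)     # 0.25*x + 0.5 in fixed point
    return max(0, min(SCALE, y))      # saturate to [0.0, 1.0]
```

Using only adds, multiplies and shifts keeps the neuron directly mappable to FPGA logic, which is why fixed-point arithmetic is preferred over floating point here.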

Relevance: 20.00%

Abstract:

The evolution of automation in recent years has made possible the continuous monitoring of industrial plant processes. With this advance, the amount of information that automation systems must handle has increased significantly. The alarms generated by monitoring equipment are a major contributor to this increase, and such equipment is usually deployed in industrial plants without a formal methodology, which leads to growth in the number of alarms generated, overloading the alarm system and therefore the operators of these plants. In this context, alarm management work arises with the objective of defining a formal methodology for installing new equipment and detecting problems in existing configurations. This thesis proposes a set of metrics for evaluating alarm systems already deployed, so that the health of such a system can be identified by analyzing the proposed indices and comparing them with parameters defined in the technical norms of alarm management. In addition, the metrics can track alarm management work, verifying whether it is improving the quality of the alarm system. To validate the proposed metrics, data from actual process plants of the petrochemical industry were used.
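Two indices that alarm-metric sets of this kind typically include, the average alarm rate and the load share of the most frequent alarm tags, are easy to sketch. The widely cited EEMUA 191 / ISA-18.2 benchmark of roughly one alarm per 10 minutes per operator is the reference point assumed here; the thesis's exact metric set may differ:

```python
from collections import Counter

def alarm_metrics(tags, window_hours):
    """Average alarm rate per 10-minute period, plus the percentage of
    all alarms contributed by the 10 most frequent tags ('bad actors')."""
    n = len(tags)
    rate_per_10min = n / (window_hours * 6.0)
    top10 = Counter(tags).most_common(10)
    top10_share = 100.0 * sum(c for _, c in top10) / n if n else 0.0
    return rate_per_10min, top10_share
```

A rate well above ~1 alarm/10 min, or a handful of tags dominating the load, flags an overloaded system and points alarm-management work at the worst offenders first.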

Relevance: 20.00%

Abstract:

Industrial automation networks are in focus and are gradually replacing the older system architectures used in the automation world. Among existing automation networks, the most prominent standard is Foundation Fieldbus (FF). This standard was chosen for the development of this work thanks to its complete application layer specification and its user interface, organized as function blocks, which allows interoperability among devices from different vendors. Nowadays, one of the most sought-after solutions in industrial automation is indirect measurement, which consists of inferring a value from the measurements of other sensors. This can be done through the implementation of so-called software sensors, and one of the most used tools in such projects is the artificial neural network. The absence of a standard way to implement neural networks in the FF environment makes a field indirect-measurement project impossible, as well as other projects involving neural networks, unless a closed proprietary solution is used, which does not guarantee interoperability among network devices, especially if they come from different vendors. In order to preserve interoperability, this work's goal is to develop a solution that implements artificial neural networks in the Foundation Fieldbus industrial network environment based on standard function blocks. Some results of the solution's implementation are also presented.
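The idea of assembling a neuron purely from standard, chainable function blocks can be conveyed in software; the `Block` class, the block set and the scheduling below are hypothetical stand-ins for FF arithmetic blocks, not the actual FF function block application:

```python
import math

class Block:
    """Toy stand-in for a standard fieldbus function block: it reads the
    published outputs of upstream blocks and publishes one of its own."""
    def __init__(self, fn, *upstream):
        self.fn, self.upstream, self.out = fn, upstream, 0.0
    def execute(self):
        self.out = self.fn(*(b.out for b in self.upstream))

def const(v):
    b = Block(lambda: v)
    b.out = v                       # a parameter block: output fixed at v
    return b

def build_neuron(x_blocks, weights, bias):
    """Assemble a sigmoid neuron only from generic blocks: one multiplier
    per input, one summing block, one sigmoid block.  Returns the output
    block and an execution schedule in dataflow order."""
    mults = [Block(lambda a, b: a * b, x, const(w))
             for x, w in zip(x_blocks, weights)]
    adder = Block(lambda *p: sum(p) + bias, *mults)
    sigm = Block(lambda s: 1.0 / (1.0 + math.exp(-s)), adder)
    return sigm, mults + [adder, sigm]

# Wire a 2-input neuron and run one macrocycle.
xs = [const(0.5), const(1.0)]
out_blk, schedule = build_neuron(xs, [0.2, -0.4], 0.1)
for blk in schedule:
    blk.execute()
```

Because each piece is a generic block with standard inputs and outputs, the same wiring pattern could in principle be expressed with any vendor's standard blocks, which is the interoperability argument the abstract makes.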

Relevance: 20.00%

Abstract:

Operating industrial processes is becoming more complex every day, and one factor contributing to this growth in complexity is the integration of new technologies and smart solutions in industry, such as decision support systems. In this regard, this dissertation aims to develop a decision support system based on a computational tool called an expert system. The main goal is to make operation more reliable and secure while maximizing the amount of information relevant to each situation, using an expert system based on rules designed for a particular area of expertise. For the modeling of such rules, a high-level environment has been proposed which allows rules to be created and manipulated more easily through visual programming. Despite its wide range of possible applications, this dissertation focuses only on the context of real-time filtering of alarms during operation, properly validated in a case study based on a real scenario that occurred in an industrial plant of an oil and gas refinery.
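A few lines of Python convey the rule-based filtering idea (forward chaining over a fact base); the tag names and rules are invented for illustration and are not the dissertation's rule set or its visual environment:

```python
def rule(condition, action):
    return {"if": condition, "then": action}

def run_rules(facts, rules):
    """Tiny forward-chaining engine: keep firing rules whose condition
    holds until nothing new fires; actions may derive facts or filter alarms."""
    fired = []
    changed = True
    while changed:
        changed = False
        for r in rules:
            if any(r is f for f in fired):
                continue
            if r["if"](facts):
                r["then"](facts)
                fired.append(r)
                changed = True
    return facts

# Example: a pump trip explains (and so suppresses) a downstream low-flow alarm.
RULES = [
    rule(lambda f: "PUMP_P01_TRIP" in f["alarms"],
         lambda f: f.setdefault("root_causes", []).append("PUMP_P01_TRIP")),
    rule(lambda f: "PUMP_P01_TRIP" in f.get("root_causes", []),
         lambda f: f["alarms"].discard("FLOW_F03_LOW")),
]
facts = {"alarms": {"PUMP_P01_TRIP", "FLOW_F03_LOW", "TEMP_T07_HIGH"}}
run_rules(facts, RULES)
```

After the run, the consequential low-flow alarm is filtered out while the unrelated temperature alarm still reaches the operator, which is exactly the load reduction such filtering aims at.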

Relevance: 20.00%

Abstract:

A serious problem that affects an oil refinery's processing units is the deposition of solid particles, or fouling, on equipment. These residues are naturally present in the oil or are by-products of chemical reactions during its transport. A fouled heat exchanger loses its capacity to adequately heat the oil and needs to be shut down periodically for cleaning. Prior knowledge of the best period to shut down the exchanger may improve the energy and production efficiency of the plant. In this work we develop a system to predict fouling in a heat exchanger of the Potiguar Clara Camarão Refinery, based on data collected in a partnership with Petrobras. Recurrent Neural Networks are used to predict the heat exchanger's future flow. This variable is the main indicator of fouling, because its value decreases gradually as deposits on the tubes reduce their diameter. The prediction can be used to tell when the flow will have decreased below an acceptable value, indicating when the exchanger must be shut down for cleaning.
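The shutdown-prediction step can be illustrated with a deliberately simple stand-in for the recurrent network: fit a linear trend to the recent flow history and extrapolate when it crosses the minimum acceptable value (function name and threshold hypothetical):

```python
import numpy as np

def predict_cleaning_time(times_h, flows, min_flow):
    """Fit a linear trend to recent flow measurements and extrapolate the
    time (in the same units as times_h) at which flow crosses min_flow.
    A simple stand-in for the recurrent-network predictor; returns None
    when no fouling (declining) trend is detected."""
    slope, intercept = np.polyfit(times_h, flows, 1)
    if slope >= 0:
        return None
    return (min_flow - intercept) / slope
```

A real fouling curve is nonlinear, which is one reason the work uses a recurrent network instead of a straight-line extrapolation; the sketch only shows the threshold-crossing logic.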

Relevance: 20.00%

Abstract:

There is a growing need to develop new tools to help end users in tasks related to the design, monitoring, maintenance and commissioning of critical infrastructures. The complexity of the industrial environment, for example, requires these tools to have flexible features in order to provide valuable data to designers during the design phases. Furthermore, it is known that industrial processes have stringent dependability requirements, since failures can cause economic losses, environmental damage and danger to people. The lack of tools enabling the evaluation of faults in critical infrastructures aggravates these problems. Accordingly, this work presents the development of a framework for dependability analysis of critical infrastructures. The proposal allows a critical infrastructure to be modeled, mapping its components to a fault tree. The generated mathematical model is then used for dependability analysis of the infrastructure, based on the failures of the equipment and its interconnections. Finally, typical scenarios of industrial environments are used to validate the proposal.
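Mapping components to a fault tree and evaluating the top event reduces, for independent failures, to the usual AND/OR gate algebra; a compact recursive sketch (the topology and probabilities are invented for illustration, not taken from the framework):

```python
def fault_tree_prob(node, p_fail):
    """Top-event failure probability of a fault tree whose leaves are
    component names and whose gates are ('AND', ...) / ('OR', ...)
    tuples, assuming independent component failures."""
    if isinstance(node, str):
        return p_fail[node]
    gate, *children = node
    probs = [fault_tree_prob(c, p_fail) for c in children]
    if gate == "AND":                 # fails only if all children fail
        out = 1.0
        for p in probs:
            out *= p
        return out
    if gate == "OR":                  # fails if any child fails
        out = 1.0
        for p in probs:
            out *= 1.0 - p
        return 1.0 - out
    raise ValueError(gate)

# Hypothetical infrastructure: the system fails if the gateway fails,
# or if both redundant routers fail.
tree = ("OR", "gateway", ("AND", "router1", "router2"))
top = fault_tree_prob(tree, {"gateway": 0.01, "router1": 0.1, "router2": 0.1})
```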

Relevance: 20.00%

Abstract:

In the area of food dehydration, vegetable drying holds a very representative position; it aims to preserve crop surpluses and began with sun drying. Among these vegetables is the carrot, which originated in Southeast Asia and is widely cultivated in Brazil. The principal objective of this work is to find alternative ways to preserve carrot slices by osmotic dehydration followed by complementary oven drying. The best conditions for osmotic pre-dehydration (temperature, immersion time, type of osmotic solution) were initially defined based on the results for moisture loss, solids gain, weight reduction and efficiency ratio of the pre-dehydrated carrot slices. The osmotic solutions used were composed of NaCl (10%) and sucrose (50 °Brix), named DO1, and sucrose alone (50 °Brix), called DO2. Osmotic pre-dehydration experiments on carrot slices were carried out at two temperature levels, with complementary drying in an air-circulation oven at 70 °C. Sensory analysis was performed, comparing osmotically dehydrated slices and slices without osmotic treatment. The best results were obtained with solution DO1 at 60 °C and an immersion time of 60 min. The drying of carrot slices presented both constant-rate and falling-rate periods. The osmotic pre-treatment reduced the initial moisture of the carrot slices, reducing the time for the product to reach a given moisture content. Fick's model, considering shrinkage, and Page's model fitted the experimental data satisfactorily, allowing the determination of effective diffusion coefficients consistent with the literature. The sensory analysis of the dried product showed greater acceptance of the carrot slices with osmotic treatment.
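The two drying models mentioned, Page's empirical thin-layer model and the series solution of Fick's second law for a slab, are compact enough to state in code (the parameter values in the usage below are illustrative placeholders, not the thesis's fitted coefficients):

```python
import numpy as np

def page_moisture_ratio(t, k, n):
    """Page thin-layer drying model: MR(t) = exp(-k * t**n)."""
    return np.exp(-k * np.asarray(t, dtype=float) ** n)

def fick_slab_mr(t, d_eff, half_thickness, terms=50):
    """Truncated series solution of Fick's second law for an infinite slab:
    MR = (8/pi^2) * sum_i 1/(2i+1)^2 * exp(-(2i+1)^2 pi^2 Deff t / (4 L^2)),
    with L the slab half-thickness."""
    t = np.asarray(t, dtype=float)
    mr = np.zeros_like(t)
    for i in range(terms):
        m = 2 * i + 1
        mr += (1 / m ** 2) * np.exp(-(m ** 2) * np.pi ** 2 * d_eff * t
                                    / (4 * half_thickness ** 2))
    return (8 / np.pi ** 2) * mr
```

Fitting `fick_slab_mr` to measured moisture-ratio curves is how the effective diffusion coefficient `d_eff` is extracted in practice; the Page model is fitted the same way for its empirical constants k and n.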

Relevance: 20.00%

Abstract:

The food industry is interested in natural products. Anthocyanins are phenolic antioxidants of great importance, with health-relevant applications. Several studies have linked the intake of fruits and vegetables to a reduced risk of chronic degenerative diseases because of their antioxidant properties. This study aimed to compare different strategies for obtaining natural pigments from red jambo (Syzygium malaccense) and to analyze their functional potential. Two strategies were studied: (1) solid-liquid extraction (SLE) in a reactor with controlled parameters, and (2) powder production. The investigation of the functional potential took into account the total phenolic content (TPC), the antioxidant activity (AA), the total anthocyanin concentration (TA) and α-amylase and α-glucosidase inhibition. The best extracts obtained by SLE showed a TPC of 174.15 mg GAE/100 g, AA of 3.56 μmol Trolox eq/g and TA of 133.59 mg cyd-3-glu/100 g. The best results for the second strategy were a TPC of 1024.22 mg GAE/100 g, AA of 29.03 μmol Trolox eq/g and TA of 1193.41 mg cyd-3-glu/100 g. Moderate amylase inhibition (26.30%) and high glucosidase inhibitory activity (97.47%) were observed. Skin extracts showed, in general, superior results compared to whole red jambo, with higher values for the dehydrated products. Based on our results, red jambo can be considered a rich source of phenolic antioxidants, as well as of amylase and glucosidase inhibitors.

Relevance: 20.00%

Abstract:

The generation of effluent from the finishing process in the textile industry is a serious environmental problem and has become an object of study in several scientific papers. Contamination with dyes and the presence of substances that are toxic to the environment make this effluent difficult to treat. Several processes have already been evaluated to remove and even degrade such pollutants, for example coagulation-flocculation, biological treatment and advanced oxidative processes, but they are not yet sufficient to enable the recovery of the dye or at least of the recovery agent. An alternative to this problem is cloud point extraction, which involves the application of nonionic surfactants at temperatures above the cloud point, making water a weak solvent for the surfactant and causing the surfactant molecules to agglomerate around the dye molecules by affinity with the organic phase. Two phases then form: a dilute phase, poor in dye and surfactant, and a coacervate phase, with higher concentrations of dye and surfactant. The later use of the coacervate to recycle dye and surfactant shows the technical and economic viability of this process. In this paper, cloud point extraction is used to remove the dye Reactive Blue from water, using the nonionic surfactant nonylphenol with 9.5 ethoxylations. The aim is to solubilize the dye molecules in the surfactant, varying concentration and temperature to study their effects. By evaluating the dye concentration in the dilute phase after extraction, it is possible to analyze thermodynamic variables, build Langmuir isotherms, and determine the behavior of the coacervate volume as a function of surfactant concentration and temperature, the distribution coefficient and the dye removal efficiency. The surfactant concentration proved to be crucial to the success of the treatment.
The removal efficiency reached 91.38%, 90.69%, 89.58%, 87.22% and 84.18% at temperatures of 65.0, 67.5, 70.0, 72.5 and 75.0 °C, respectively, showing that cloud point extraction is an efficient alternative for the treatment of wastewater containing Reactive Blue.
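The quantities evaluated above are simple ratios; a short sketch of the removal efficiency, the phase distribution coefficient and the standard Langmuir isotherm form (the constants used in the test are placeholders, not the paper's fitted values):

```python
def removal_efficiency(c0, c_dilute):
    """Percent of dye removed from the dilute phase relative to the feed."""
    return 100.0 * (c0 - c_dilute) / c0

def distribution_coefficient(c_coacervate, c_dilute):
    """Partition of dye between the coacervate and dilute phases."""
    return c_coacervate / c_dilute

def langmuir(c_eq, q_max, k_l):
    """Langmuir isotherm: q = q_max * K_L * C / (1 + K_L * C);
    q saturates at q_max as the equilibrium concentration grows."""
    return q_max * k_l * c_eq / (1.0 + k_l * c_eq)
```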