998 results for "Técnicas bacteriológicas"


Relevance: 20.00%

Abstract:

The usual programs for load flow calculation were, in general, developed to simulate electric energy transmission, subtransmission, and distribution systems. However, the mathematical methods and algorithms behind most formulations were based only on the characteristics of transmission systems, which were the main focus of engineers and researchers, even though the physical characteristics of transmission systems differ considerably from those of distribution systems. In transmission systems, voltage levels are high and lines are generally very long, so the capacitive and inductive effects of the system have a considerable influence on the quantities of interest and must be taken into account. Transmission loads also have a macro nature (for example, cities, neighborhoods, or large industries) and are generally close to balanced, which reduces the need for a three-phase load flow methodology. Distribution systems, on the other hand, present different characteristics: voltage levels are low compared to transmission, which practically cancels the capacitive effects of the lines; and the loads are transformers whose secondaries feed small consumers, often single-phase ones, so the probability of finding an unbalanced circuit is high and a three-phase methodology becomes important. Moreover, equipment such as voltage regulators, which rely simultaneously on the concepts of phase and line voltage, requires a three-phase methodology for its real behavior to be simulated. For these reasons, this work first develops a three-phase load flow method to simulate the steady-state behavior of distribution systems. The Power Summation Algorithm, already widely tested and approved by researchers and engineers in the simulation of radial distribution systems, mainly under single-phase representation, was used as the basis for the three-phase method. In our formulation, lines are modeled as three-phase circuits considering the magnetic coupling between phases, while the earth effect is accounted for through the Carson reduction. Although loads are normally connected to the transformer secondaries, the hypothesis of star- or delta-connected loads on the primary circuit was also considered. To simulate voltage regulators, a new model was adopted that allows various configurations to be represented according to their real operation. Finally, switches with current measurement at various points of the feeder can be represented: the loads are adjusted during the iterative process so that the current at each switch converges to the measured value specified in the input data. In a second stage of the work, sensitivity parameters were derived from the described load flow in order to support subsequent optimization processes. These parameters are obtained by calculating the partial derivatives of one variable with respect to another, in general involving voltages, losses, and reactive powers.

After describing the calculation of the sensitivity parameters, the Gradient Method is presented, using these parameters to optimize an objective function defined for each type of study. The first study concerns the reduction of technical losses in a medium-voltage feeder through the installation of capacitor banks; the second concerns the correction of the voltage profile through the installation of capacitor banks or voltage regulators. For loss reduction, the objective function is the sum of the losses in all parts of the system; for voltage-profile correction, it is the sum of the squared voltage deviations at each node with respect to the rated voltage. At the end of the work, results of applying the described methods to several feeders are presented, giving insight into their performance and accuracy.
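To make the power summation idea concrete, the sketch below runs a single-phase backward/forward sweep on a small radial feeder; the work itself develops the full three-phase version with mutual coupling and Carson's reduction. All impedances and loads here are illustrative placeholders, not data from the thesis.

```python
import numpy as np

# Single-phase backward/forward sweep on a 4-bus radial feeder.
# Bus 0 is the substation (1.0 pu); line k connects bus k to bus k+1.
Z = np.array([0.02 + 0.04j, 0.03 + 0.05j, 0.025 + 0.045j])    # line impedances (pu)
S_load = np.array([0.10 + 0.05j, 0.08 + 0.03j, 0.12 + 0.06j])  # loads at buses 1-3 (pu)
V = np.ones(4, dtype=complex)  # flat start

for _ in range(30):
    # Backward sweep: sum downstream load plus series losses into each line.
    S_line = np.zeros(3, dtype=complex)
    for k in reversed(range(3)):
        downstream = S_line[k + 1] if k + 1 < 3 else 0.0
        S_line[k] = S_load[k] + downstream
        I = np.conj(S_line[k] / V[k + 1])
        S_line[k] += Z[k] * abs(I) ** 2  # power summation: add line-k loss
    # Forward sweep: update voltages from the substation outward.
    V_prev = V.copy()
    for k in range(3):
        V[k + 1] = V[k] - Z[k] * np.conj(S_line[k] / V[k])
    if np.max(np.abs(V - V_prev)) < 1e-9:
        break

print(np.abs(V).round(5))  # voltage profile along the feeder
```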

Relevance: 20.00%

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior

Relevance: 20.00%

Abstract:

This work proposes the development of an intelligent system for the analysis of digital mammograms, capable of detecting and classifying masses and microcalcifications. The digital mammograms are pre-processed with digital image processing techniques in order to adapt the images to the system for detection and automatic classification of the calcifications present in the breast. The model adopted for the detection and classification of the mammograms uses Kohonen's neural network through the Self-Organizing Map (SOM) algorithm. The K-means vector quantization algorithm is also used for the same purpose, and an analysis of the performance of the two algorithms in the automatic classification of digital mammograms is carried out. The developed system is intended to aid the radiologist in the diagnosis and follow-up of the development of abnormalities.
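As a rough illustration of the comparison described, the sketch below runs scikit-learn's KMeans against a minimal hand-rolled Kohonen network on synthetic 2-D features. Real mammogram descriptors, neighborhood updates, and the full SOM grid used in the thesis are omitted; the data are stand-ins.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two synthetic clusters standing in for mass/microcalcification features.
X = np.vstack([rng.normal(0.0, 0.3, (100, 2)), rng.normal(2.0, 0.3, (100, 2))])

# K-means baseline.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# Minimal Kohonen map: 2 neurons, winner-take-all update with decaying rate
# (a real SOM also updates the winner's grid neighbors).
W = rng.normal(1.0, 0.5, (2, 2))
samples = rng.permutation(X)
for t, x in enumerate(samples):
    lr = 0.5 * (1.0 - t / len(samples))
    bmu = np.argmin(np.linalg.norm(W - x, axis=1))  # best-matching unit
    W[bmu] += lr * (x - W[bmu])

print("k-means centers:\n", km.cluster_centers_.round(2))
print("SOM prototypes:\n", W.round(2))
```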

Relevance: 20.00%

Abstract:

Image segmentation is one of the image processing problems that deserves special attention from the scientific community. This work studies unsupervised clustering and pattern recognition methods applicable to medical image segmentation. Methods based on Natural Computing have proven very attractive in such tasks and are studied here in order to verify their applicability to medical image segmentation. The following methods are implemented: GKA (Genetic K-means Algorithm), GFCMA (Genetic FCM Algorithm), PSOKA (PSO- and K-means-based Clustering Algorithm), and PSOFCM (PSO- and FCM-based Clustering Algorithm). In addition, clustering validity indexes are used as quantitative measures to evaluate the results produced by the algorithms. Visual and qualitative evaluations are also carried out, mainly using data from the BrainWeb brain simulator as ground truth.
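The sketch below illustrates the PSO-based clustering idea behind PSOKA under simplifying assumptions: each particle encodes a set of centroids and the fitness is the within-cluster sum of squared errors. The data, swarm parameters, and stopping rule are illustrative, and the genetic and FCM variants are not shown.

```python
import numpy as np

rng = np.random.default_rng(1)
# Two synthetic blobs standing in for image feature vectors.
X = np.vstack([rng.normal(0.0, 0.2, (50, 2)), rng.normal(1.5, 0.2, (50, 2))])
k, n_particles = 2, 10

def sse(centroids):
    # Assign each point to its nearest centroid and sum squared distances.
    d = np.linalg.norm(X[:, None] - centroids[None], axis=2)
    return d.min(axis=1).sum()

pos = rng.uniform(X.min(), X.max(), (n_particles, k, 2))  # particle = k centroids
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([sse(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(50):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    f = np.array([sse(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print("best centroids:\n", gbest.round(2), "\nSSE:", round(sse(gbest), 3))
```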

Relevance: 20.00%

Abstract:

Demand for mangoes has grown significantly in the international market, and in Brazil mango production shows great potential for export growth. The use of flower-induction and post-harvest techniques has made it possible to exploit market windows at times when the supply from competing countries is reduced. The State of São Paulo has increased its mango production and exports over the last decade. The objective of this work was to analyze the relationship between the adoption of techniques for improving the quality of the mangoes produced, as required for export, and the expansion of the crop in the State of São Paulo. To analyze the quality of the mangoes produced, the degree of compliance of producers with the standards required in consumer markets was used as a parameter, in two municipalities of the State. The field research showed that compliance with the required quality standards has shaped regions within the State of São Paulo where the increase in production is directly related to exports. Additionally, it was found that this activity has provided an attractive economic return to producers.

Relevance: 20.00%

Abstract:

This work presents a theoretical and numerical analysis of the cascading of frequency selective surfaces (FSS) that use rectangular patches and triangular Koch fractals as elements. Two cascading techniques are used to determine the transmission and reflection characteristics. Frequency selective surfaces form a large area within telecommunications and have been widely used due to their low cost, low weight, and ability to be integrated with other microwave circuits. They are especially important in applications such as aircraft, antenna systems, radomes, rockets, and missiles. FSS applications in high frequency ranges have been investigated, as have cascaded (multilayer) structures and active FSS. The analysis uses microwave circuit theory together with Floquet harmonics, which allows the scattering parameters of each structure, and of the composite structure of two or more FSS, to be obtained. Numerical results are presented for the transmission characteristics and compared with experimental results and with simulations using the commercial software Ansoft Designer® v3. Finally, some suggestions are presented for future work on this subject.
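One standard way to cascade two FSS screens, when only the dominant Floquet mode is retained, is to convert each scattering matrix into a wave transmission matrix, multiply, and convert back. The sketch below does exactly that for two illustrative 2-port S-matrices; the formulation in the thesis, which keeps higher-order harmonics, is more general.

```python
import numpy as np

def s_to_t(S):
    # Wave transmission matrix with convention [a1, b1]^T = T [b2, a2]^T.
    S11, S12, S21, S22 = S[0, 0], S[0, 1], S[1, 0], S[1, 1]
    return np.array([[1 / S21, -S22 / S21],
                     [S11 / S21, S12 - S11 * S22 / S21]])

def t_to_s(T):
    # Inverse conversion back to scattering parameters.
    return np.array([[T[1, 0] / T[0, 0], T[1, 1] - T[1, 0] * T[0, 1] / T[0, 0]],
                     [1 / T[0, 0], -T[0, 1] / T[0, 0]]])

S1 = np.array([[0.30, 0.90], [0.90, 0.30]], dtype=complex)  # layer 1 (illustrative)
S2 = np.array([[0.20, 0.95], [0.95, 0.20]], dtype=complex)  # layer 2 (illustrative)

S_cascade = t_to_s(s_to_t(S1) @ s_to_t(S2))  # cascading = T-matrix product
print("cascaded |S21| (transmission):", abs(S_cascade[1, 0]).round(3))
print("cascaded |S11| (reflection):  ", abs(S_cascade[0, 0]).round(3))
```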

Relevance: 20.00%

Abstract:

This work reports an experimental study of a high-frequency power supply that feeds the plasma torch to be implemented in the PLASPETRO project. The supply consists of two static converters built with Insulated Gate Bipolar Transistors (IGBTs), whose gate drivers are triggered by a Digital Signal Processor (DSP) through optical fibers to reduce problems with electromagnetic interference (EMI). The first stage, which is the main subject of this work, is a pre-regulator in the form of a three-phase AC-to-DC boost converter with power factor correction; the second stage is the high-frequency source itself: a series-resonant inverter composed of four inverter cells, each operating in soft-switching mode at around 115 kHz and alternating among themselves to supply the load (the plasma torch) with an alternating current at 450 kHz. The first stage provides the series-resonant inverter with a DC voltage whose value is controlled from the supply delivered by the utility grid, while correcting the power factor of the system as a whole. The DC bus voltage at the output of the first stage is used to control the power transferred by the inverter to the load and may vary from 550 VDC up to a maximum of 800 VDC. A proportional-integral (PI) controller regulates the DC bus voltage level, and two additional proportional-integral current controllers are used to achieve unity power factor. Computational simulations were performed to assist in sizing and in predicting performance. All the control and supervisory communication functions were implemented on a DSP.
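As a hedged illustration of the voltage loop, the snippet below implements a discrete PI controller with a clamped integrator (simple anti-windup) and drives a crude first-order stand-in for the DC bus from 550 V toward the 800 V reference. Gains, time constants, and the plant model are assumptions, not values from the thesis.

```python
# Discrete PI controller of the kind used for the DC-bus voltage loop.
class PI:
    def __init__(self, kp, ki, dt, out_min, out_max):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0

    def step(self, ref, meas):
        error = ref - meas
        self.integral += self.ki * error * self.dt
        # Clamp the integrator: simple anti-windup.
        self.integral = min(max(self.integral, self.out_min), self.out_max)
        u = self.kp * error + self.integral
        return min(max(u, self.out_min), self.out_max)

dt, tau = 1e-4, 0.05  # sample time and assumed plant time constant (s)
pi = PI(kp=0.5, ki=50.0, dt=dt, out_min=0.0, out_max=1.0)
v_bus = 550.0  # start at the lower end of the 550-800 VDC range
for _ in range(4000):
    duty = pi.step(ref=800.0, meas=v_bus)
    v_bus += (1000.0 * duty - v_bus) * dt / tau  # crude first-order DC-bus model
print(round(v_bus, 1))  # settles near the 800 V reference
```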

Relevance: 20.00%

Abstract:

This study uses a computational model that considers the statistical characteristics of the wind and the reliability characteristics of a wind turbine, such as failure and repair rates, representing the wind farm by a Markov process in order to determine the estimated annual energy generated, and compares it with a real case. The model can also be used in reliability studies and provides performance indicators that help in analyzing the feasibility of setting up a wind farm, once the power curve and wind speed measurements are available. To validate the model, simulations were carried out using the database of the PETROBRAS wind farm in Macau. The results were very close to the real ones, confirming that the model successfully reproduces the behavior of all components involved. Finally, the results of this model were compared with the annual energy estimate obtained by modeling the wind distribution with a Weibull statistical distribution.
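A back-of-the-envelope version of such a model is sketched below: hourly Weibull wind samples, a piecewise power curve, and a two-state (up/down) Markov chain for turbine availability. The failure and repair rates, power curve, and turbine rating are illustrative assumptions rather than Macau wind farm data.

```python
import numpy as np

rng = np.random.default_rng(0)
k_shape, c_scale = 2.0, 8.0        # assumed Weibull shape / scale (m/s)
lam, mu = 1 / 1000.0, 1 / 50.0     # assumed failure / repair rates (per hour)

def power_kw(v):
    # Simple 1 MW power curve: cut-in 3 m/s, rated 12 m/s, cut-out 25 m/s.
    return np.where((v < 3) | (v > 25), 0.0,
                    np.where(v >= 12, 1000.0, 1000.0 * ((v - 3) / 9) ** 3))

hours = 8760
wind = c_scale * rng.weibull(k_shape, hours)  # hourly wind speed samples

# Two-state Markov chain: one up/down transition test per hour.
up = np.empty(hours, dtype=bool)
state = True
for t in range(hours):
    state = (rng.random() > lam) if state else (rng.random() < mu)
    up[t] = state

energy_mwh = power_kw(wind)[up].sum() / 1000.0
print(f"estimated annual energy: {energy_mwh:.0f} MWh, "
      f"availability: {up.mean():.3f}")  # steady state is mu / (lam + mu)
```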

Relevance: 20.00%

Abstract:

A neuro-fuzzy system combines two or more control techniques in a single structure. The main characteristic of such a structure is that it joins the best aspects of each technique to form a hybrid controller, which can be based on fuzzy systems, artificial neural networks, genetic algorithms, or reinforcement learning techniques. Neuro-fuzzy systems have shown themselves to be a promising technique for industrial applications. Two neuro-fuzzy models were developed, an ANFIS model and a NEFCON model. Both were applied to the control of a ball-and-beam system, and their results and the changes they required are discussed. The choice of controller inputs and the learning algorithms used, among other details of the hybrid systems, are also discussed. The results show the changes in structure after learning and the conditions for using each controller according to its characteristics.
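To give a flavor of the ANFIS side, the sketch below implements a zero-order Sugeno forward pass (Gaussian memberships, normalized firing strengths, constant consequents) and adapts the consequents by LMS against a toy linear control law. The actual ball-and-beam controllers in the thesis have more inputs and a proper training procedure; everything here is an illustrative stand-in.

```python
import numpy as np

rng = np.random.default_rng(0)
centers, sigma = np.array([-1.0, 0.0, 1.0]), 0.5  # 3 fuzzy sets on one input
w_out = rng.normal(0, 0.1, 3)                     # one constant consequent per rule

def infer(x):
    mu = np.exp(-((x - centers) ** 2) / (2 * sigma ** 2))  # memberships (layers 1-2)
    wbar = mu / mu.sum()                                   # normalized firing (layer 3)
    return wbar @ w_out, wbar                              # weighted output (layers 4-5)

# Train the consequents to mimic a toy target control law u = -0.8 * x.
for _ in range(2000):
    x = rng.uniform(-1.2, 1.2)
    y, wbar = infer(x)
    w_out += 0.1 * ((-0.8 * x) - y) * wbar  # LMS step on the consequents

y, _ = infer(0.5)
print(round(y, 3), "vs target", -0.8 * 0.5)  # learned output vs target
```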

Relevance: 20.00%

Abstract:

Visual odometry is the process of estimating a camera's position and orientation based solely on images and on features (projections of visual landmarks present in the scene) extracted from them. With the increasing advance of computer vision algorithms and computing power, the subarea known as Structure from Motion (SFM) began to supply mathematical tools for localization systems in robotics and augmented reality applications, in contrast to its initial purpose of serving inherently offline solutions for 3D reconstruction and image-based modeling. Accordingly, this work proposes a pipeline for obtaining relative pose, featuring a previously calibrated camera as the positional sensor and based entirely on SFM models and algorithms. Techniques usually applied in camera localization systems, such as Kalman filters and particle filters, are not used, so additional information such as probabilistic models for the camera state transition becomes unnecessary. Experiments assessing both the 3D reconstruction quality and the camera positions estimated by the system were performed, in which image sequences captured in realistic scenarios were processed and compared with localization data gathered from a mobile robotic platform.
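A minimal version of one relative-pose step in such a pipeline, using OpenCV's ORB features, essential matrix estimation, and pose recovery, is sketched below. The image filenames and the intrinsic matrix K are placeholders, and this generic SFM step does not claim to reproduce the thesis's exact pipeline.

```python
import cv2
import numpy as np

# Placeholder intrinsics for a pre-calibrated camera (fx, fy, cx, cy assumed).
K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])

img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)  # placeholder frames
img2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

# Detect and match ORB features between the two frames.
orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Essential matrix with RANSAC, then cheirality check to recover (R, t).
E, mask = cv2.findEssentialMat(pts1, pts2, K, cv2.RANSAC, 0.999, 1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
print("relative rotation:\n", R)
print("translation direction (up to scale):", t.ravel())
```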

Relevance: 20.00%

Abstract:

In Simultaneous Localization and Mapping (SLAM), a robot placed at an unknown location in an arbitrary environment must be able to build a representation of that environment (a map) while simultaneously localizing itself within it, using only the information captured by its sensors and known control signals. Recently, driven by the advance of computing power, work in this area has proposed the use of a video camera as the sensor, giving rise to visual SLAM. Most visual SLAM approaches work by extracting features of the environment, computing the necessary correspondences, and estimating the required parameters from them. This work presents a monocular visual SLAM system that instead uses direct image registration: it computes the image reprojection error and applies optimization methods that minimize this error, obtaining the robot pose and the map of the environment directly from the pixels of the images. The feature extraction and matching steps are thus not needed, which enables the system to work well in environments where traditional approaches have difficulty. Moreover, by addressing the SLAM problem in this way we avoid error propagation, a very common problem in traditional approaches. Since the computational cost of this approach is high, several types of optimization methods were tested in order to find a good balance between estimation quality and processing time. The results presented in this work show the success of the system in different environments.
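The toy below illustrates direct image registration in its simplest form: a 2-D translation between two frames is estimated by minimizing the photometric error, with no feature extraction or matching. The full system optimizes a 6-DoF pose and scene structure; the smooth synthetic images and the Powell optimizer here are illustrative choices.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, shift as warp_shift
from scipy.optimize import minimize

rng = np.random.default_rng(0)
ref = gaussian_filter(rng.random((64, 64)), sigma=4)  # smooth synthetic frame
cur = warp_shift(ref, (2.3, -1.7), order=1)           # frame after "camera motion"

def photometric_error(p):
    # Warp the current frame back by the candidate shift and compare pixels,
    # cropping the borders to avoid the zero-filled edges of the warp.
    warped = warp_shift(cur, (-p[0], -p[1]), order=1)
    return np.mean((warped[8:-8, 8:-8] - ref[8:-8, 8:-8]) ** 2)

res = minimize(photometric_error, x0=[0.0, 0.0], method="Powell")
print("estimated shift:", res.x.round(2))  # expect roughly (2.3, -1.7)
```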

Relevance: 20.00%

Abstract:

The introduction of new digital services in cellular networks, at ever higher transmission rates, has stimulated recent research into ways of increasing data communication capacity and reducing delays in the forward and reverse links of third-generation WCDMA systems. These studies have resulted in new standards, known as 3.5G, published by the 3GPP group for the evolution of third-generation cellular systems. In this master's thesis, the performance of a 3G WCDMA system with several base stations and thousands of users is evaluated with the aid of the NPSW planning tool. In addition, the performance of two 3.5G techniques that are candidates for enhancing WCDMA uplink capacity, hybrid automatic retransmission and multiuser detection with interference cancellation, is verified by means of Matlab simulations of the increase in data communication capacity and the reduction of packet retransmission delays.
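As a hedged illustration of why hybrid retransmission helps, the snippet below simulates Chase combining on a BPSK/AWGN link: each retransmission is combined with the earlier copies, raising the effective SNR and lowering the bit error rate. No WCDMA spreading, scheduling, or multiuser detection is modeled.

```python
import numpy as np

rng = np.random.default_rng(0)
n_bits, snr_db, max_tx = 10000, 0.0, 3
bits = rng.integers(0, 2, n_bits)
x = 1.0 - 2.0 * bits                 # BPSK mapping: 0 -> +1, 1 -> -1
noise_std = 10 ** (-snr_db / 20)

combined = np.zeros(n_bits)
for tx in range(1, max_tx + 1):
    y = x + noise_std * rng.normal(size=n_bits)  # one (re)transmission
    combined += y                                # Chase combining (MRC of copies)
    ber = np.mean((combined < 0) != bits.astype(bool))
    print(f"after {tx} transmission(s): BER = {ber:.4f}")
```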

Relevance: 20.00%

Abstract:

One of the most important goals of bioinformatics is the ability to identify genes in the uncharacterized DNA sequences held in worldwide databases. Gene expression in prokaryotes initiates when the RNA polymerase enzyme interacts with DNA regions called promoters, where the main regulatory elements of the transcription process are located. Despite the improvement of in vitro techniques for molecular biology analysis, characterizing and identifying a large number of promoters in a genome is a complex task, and the main drawback is the absence of a large set of promoters from which conserved patterns among species could be identified. Hence, an in silico method to predict promoters in any species is a challenge, and improved promoter prediction methods can be one step towards developing more reliable ab initio gene prediction methods. In this work, we present an empirical comparison of machine learning (ML) techniques, such as Naive Bayes, decision trees, support vector machines, neural networks, Voted Perceptron, PART, and k-NN, as well as ensemble approaches (Bagging and Boosting), for the task of predicting promoters in Bacillus subtilis. To that end, we first built two data sets of promoter and non-promoter sequences, one for B. subtilis and a hybrid one. A cross-validation procedure was applied to evaluate the ML methods. Good results were obtained with ML methods such as SVM and Naive Bayes on the B. subtilis data; however, we did not reach good results on the hybrid database.
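The evaluation protocol can be sketched as below: k-mer counts as sequence features and cross-validated accuracy for two of the compared classifiers, using scikit-learn. The randomly generated, AT-biased "promoters" are stand-ins for the B. subtilis data sets, so the numbers only illustrate the procedure.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def random_seq(n, base_probs):
    return "".join(rng.choice(list("ACGT"), size=n, p=base_probs))

# AT-biased "promoters" vs uniform background (synthetic stand-ins).
promoters = [random_seq(50, [0.35, 0.15, 0.15, 0.35]) for _ in range(100)]
background = [random_seq(50, [0.25, 0.25, 0.25, 0.25]) for _ in range(100)]
y = np.array([1] * 100 + [0] * 100)

# Represent each sequence by its 3-mer counts.
vectorizer = CountVectorizer(analyzer="char", ngram_range=(3, 3))
X = vectorizer.fit_transform(promoters + background)

for name, clf in [("Naive Bayes", MultinomialNB()), ("SVM", SVC())]:
    acc = cross_val_score(clf, X, y, cv=10).mean()
    print(f"{name}: mean 10-fold accuracy = {acc:.3f}")
```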

Relevance: 20.00%

Abstract:

Nowadays, classifying proteins into structural classes, which concerns the inference of patterns in their 3D conformation, is one of the most important open problems in molecular biology. The main reason is that the function of a protein is intrinsically related to its spatial conformation; however, such conformations are very difficult to obtain experimentally in the laboratory, so the problem has drawn the attention of many researchers in bioinformatics. Given the great difference between the number of known protein sequences and the number of experimentally determined three-dimensional structures, the demand for automated techniques for the structural classification of proteins is very high. In this context, computational tools, especially machine learning (ML) techniques, have become essential for dealing with this problem. In this work, ML techniques are used in the recognition of protein structural classes: decision trees, k-nearest neighbor, Naive Bayes, support vector machines, and neural networks. These methods were chosen because they represent different learning paradigms and have been widely used in the bioinformatics literature. Aiming to improve on the performance of these individual classifiers, homogeneous (Bagging and Boosting) and heterogeneous (Voting, Stacking, and StackingC) multi-classification systems are used. Moreover, since the protein database used in this work suffers from imbalanced classes, techniques for class balancing (random undersampling, Tomek links, CNN, NCL, and OSS) are used to minimize the problem. To evaluate the ML methods, a cross-validation procedure is applied in which the accuracy of the classifiers is measured as the mean classification error rate on independent test sets; these means are compared pairwise by hypothesis tests in order to assess whether the differences between them are statistically significant. Among the individual classifiers, the support vector machine presented the best accuracy. The multi-classification systems (homogeneous and heterogeneous) showed, in general, performance superior or similar to that of the individual classifiers, especially Boosting with decision trees and StackingC with linear regression as the meta-classifier. The Voting method, despite its simplicity, proved adequate for the problem addressed in this work. The class-balancing techniques, on the other hand, did not produce a significant improvement in the overall classification error, although they did improve the classification error for the minority class; in this context, the NCL technique showed itself to be the most appropriate.
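A skeleton of this experimental setup, assuming synthetic imbalanced data in place of the protein features, is given below: an individual decision tree against Bagging and Boosting ensembles under cross-validation, followed by naive random undersampling of the majority class (the thesis also uses Tomek links, CNN, NCL, and OSS, which are not shown).

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic imbalanced stand-in for the protein structural-class data.
X, y = make_classification(n_samples=600, n_features=20, weights=[0.9, 0.1],
                           random_state=0)

models = {
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "Bagging": BaggingClassifier(DecisionTreeClassifier(), n_estimators=50,
                                 random_state=0),
    "Boosting": AdaBoostClassifier(n_estimators=50, random_state=0),
}
for name, model in models.items():
    err = 1 - cross_val_score(model, X, y, cv=10).mean()
    print(f"{name}: mean CV error = {err:.3f}")

# Naive random undersampling of the majority class (cf. the balancing step).
rng = np.random.default_rng(0)
majority, minority = np.flatnonzero(y == 0), np.flatnonzero(y == 1)
keep = np.concatenate([rng.choice(majority, size=minority.size, replace=False),
                       minority])
print("balanced set size:", keep.size)
```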