70 results for Computational algorithms
at the Universidade Federal do Rio Grande do Norte (UFRN)
Abstract:
The increasing use of high-resolution shallow seismic methods to investigate geological, environmental and industrial problems has driven the development of processing techniques, workflows and computational algorithms. Until recently, processing techniques were rarely applied to this kind of data, and interpretation was performed on the data as acquired. To facilitate and improve the practices adopted, a free, open-source graphical application called OpenSeismic was developed. It is based on the free software Seismic Un*x, widely used in the processing of conventional seismic data for hydrocarbon reservoir exploration. The data used to validate the initiative were high-resolution marine seismic data acquired by the Laboratory of Marine Geology, Geophysics and Environmental Monitoring (GGEMMA) of the Federal University of Rio Grande do Norte (UFRN) for the SISPLAT Project, in the paleo-valley region of the Rio Açu. These data were submitted to the processing flow developed by Gomes (2009) using the free software developed in this work, OpenSeismic, as well as other software: the free Seismic Un*x and the commercial ProMAX. Despite their peculiarities, all three presented similar results.
Abstract:
The objective of this research was to investigate the monthly, seasonal, annual and interdecadal climatology of reference evapotranspiration (ETo) in the state of Acre, in order to better understand its spatial and temporal variability and to identify possible trends in the region. The study used data from the municipalities of Rio Branco (the state capital), Tarauacá and Cruzeiro do Sul over a 30-year period (1985-2014), based on monthly records from surface weather stations of the National Institute of Meteorology. First, the meteorological data were checked for consistency, and gaps in the time series were filled by means of multivariate techniques. Subsequently, statistical tests for trend (Mann-Kendall) and homogeneity were performed; Sen's estimator was used to estimate the magnitude of the trend, and computational algorithms containing parametric and non-parametric two-sample tests identified the year from which the trend became significant. Finally, analysis of variance (ANOVA) was applied to verify whether there were significant differences in mean annual evapotranspiration between locations. The indirect Penman-Monteith method, as parameterized by FAO, was used to calculate ETo. Descriptive statistics showed that the mean annual ETo was 3.80, 2.92 and 2.86 mm day-1 for Rio Branco, Tarauacá and Cruzeiro do Sul, respectively. The series display a marked seasonal pattern, with a minimum in June and a maximum in October; Rio Branco showed the strongest signal (largest amplitudes), while Cruzeiro do Sul presented the highest variability among the studied locations. ANOVA indicated that the annual means differ statistically at the 1% significance level between locations, except between Cruzeiro do Sul and Tarauacá, whose annual means showed no statistically significant difference.
For all three locations, the 2000s was the decade with the highest ETo values, associated with warmer waters of the North Atlantic basin, while the 1980s had the lowest values, associated with cooler waters of that basin. The Mann-Kendall test and Sen's estimator revealed an increasing trend in seasonal reference evapotranspiration (autumn, winter and spring) of about 0.11 mm per decade, which became statistically significant from 1990, 1996 and 2001 onwards for Cruzeiro do Sul, Tarauacá and Rio Branco, respectively. Trend analysis of the meteorological parameters showed a positive trend, at the 5% significance level, for mean temperature, minimum temperature and solar radiation.
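The trend machinery used above is compact enough to sketch. Below is a minimal, self-contained illustration of the Mann-Kendall S statistic and Sen's slope estimator (not the authors' code); the data series is synthetic, reusing the abstract's 0.11 figure only as an illustrative slope.

```python
def mann_kendall_s(x):
    """Mann-Kendall S statistic: concordant minus discordant pairs."""
    n = len(x)
    s = 0
    for i in range(n - 1):
        for j in range(i + 1, n):
            diff = x[j] - x[i]
            s += (diff > 0) - (diff < 0)
    return s

def sen_slope(x):
    """Sen's estimator: median of all pairwise slopes (x[j]-x[i])/(j-i)."""
    slopes = sorted((x[j] - x[i]) / (j - i)
                    for i in range(len(x) - 1)
                    for j in range(i + 1, len(x)))
    m = len(slopes)
    mid = m // 2
    return slopes[mid] if m % 2 else 0.5 * (slopes[mid - 1] + slopes[mid])

# A strictly increasing synthetic series: all 45 pairs are concordant.
series = [2.80 + 0.11 * t for t in range(10)]
print(mann_kendall_s(series))        # 45
print(round(sen_slope(series), 3))   # 0.11
```

A positive S with a significant normalized statistic indicates an upward trend, and Sen's slope gives its magnitude, as in the per-decade rates reported above.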
Abstract:
Fluorescent proteins are an essential tool in many fields of biology, since they allow us to watch the development of structures and dynamic processes of cells in living tissue with the aid of fluorescence microscopy. Optogenetics is another technique currently in wide use in neuroscience. In general, it makes it possible to activate or deactivate neurons by shining light of specific wavelengths on cells that carry light-sensitive ion channels, and it can be combined with fluorescent proteins. This dissertation has two main objectives. Initially, we study the interaction between light and mouse brain tissue as applied to optogenetic experiments. In this step, we model absorption and scattering effects using the optical characteristics of mouse brain tissue and Kubelka-Munk theory, for specific wavelengths, as a function of light penetration depth within the tissue. Furthermore, we model temperature variations using the finite element method to solve Pennes' bioheat equation, with the aid of the COMSOL Multiphysics Modeling Software 4.4, simulating light-stimulation protocols typically used in optogenetics. Subsequently, we develop computational algorithms to reduce the exposure of neurons to the light radiation needed to visualize their emitted fluorescence. At this stage, we describe image processing techniques developed for fluorescence microscopy that reduce the exposure of brain samples to the continuous light responsible for fluorochrome excitation. The techniques track, in real time, a region of interest (ROI) and replace the fluorescence emitted by the cells with a virtual mask, produced by overlaying the tracked ROI on previously stored fluorescence information, preserving cell location independently of the exposure time to fluorescent light.
In summary, this dissertation investigates and describes the effects of light radiation on brain tissue in the context of optogenetics, and provides a computational tool for fluorescence microscopy experiments that reduces image bleaching and photodamage caused by intense exposure of fluorescent cells to light.
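The scattering part of the Kubelka-Munk modeling mentioned above can be illustrated with a short sketch. The simplified transmission formula T(z) = 1/(Sz + 1), which neglects absorption, is a common approximation for light penetration in brain tissue; the scattering coefficient below is an illustrative placeholder, not a value measured in the dissertation.

```python
def km_transmission(z_mm, S_per_mm):
    """Kubelka-Munk-style fraction of light transmitted after depth z
    in a scattering medium with scattering coefficient S (absorption neglected)."""
    return 1.0 / (S_per_mm * z_mm + 1.0)

def depth_for_fraction(frac, S_per_mm):
    """Depth at which transmitted intensity falls to `frac` of the input."""
    return (1.0 / frac - 1.0) / S_per_mm

S = 10.0  # illustrative scattering coefficient (mm^-1), not a measured value
print(round(km_transmission(0.1, S), 3))   # 0.5: half the light lost by 0.1 mm
print(round(depth_for_fraction(0.1, S), 2))  # 0.9 mm to fall to 10% intensity
```

Curves of this kind, per wavelength, are what determine how much optical power must be delivered to reach target cells at a given depth.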
Abstract:
Antenna arrays can provide high, controllable directivity, which makes them suitable for radio base stations, radar systems, and point-to-point or satellite links. Optimizing an array design is usually a hard task because of the non-linear, multiobjective character of the problem, requiring numerical techniques such as genetic algorithms. Therefore, in order to optimize the electronic control of the antenna array radiation pattern through real-coded genetic algorithms, a numerical tool was developed that is able to position the array's major lobe, reduce the side lobe levels, cancel interfering signals arriving from specific directions, and improve the antenna radiation performance. This was accomplished by combining antenna theory with optimization methods, mainly genetic algorithms, leading to a numerical tool with novel gene codification and crossover rules, which is one of the most important contributions of this work. The efficiency of the developed genetic algorithm tool was tested and validated in several antenna and propagation applications. The numerical results meet the specified requirements, showing the tool's ability to handle the problems considered, as well as a great perspective for application in future work.
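As a rough illustration of the approach (not the thesis's tool, which uses its own gene codification and crossover rules), the sketch below applies a small real-coded genetic algorithm to the phase excitations of a uniform linear array, rewarding radiated power toward a target direction. All parameters are illustrative.

```python
import cmath, math, random

random.seed(1)
N, d = 8, 0.5              # 8 elements, half-wavelength spacing
target = math.radians(60)  # desired main-lobe direction

def array_factor(phases, theta):
    """Magnitude of the array factor of a uniform linear array."""
    k = 2 * math.pi  # wavenumber for unit wavelength
    return abs(sum(cmath.exp(1j * (k * d * n * math.cos(theta) + phases[n]))
                   for n in range(N)))

def fitness(phases):
    # Reward power radiated toward the target direction (upper bound is N).
    return array_factor(phases, target)

def evolve(pop_size=30, gens=60, sigma=0.3):
    pop = [[random.uniform(-math.pi, math.pi) for _ in range(N)]
           for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(gens):
        nxt = [best]  # elitism: keep the incumbent solution
        while len(nxt) < pop_size:
            pa = max(random.sample(pop, 2), key=fitness)  # tournament
            pb = max(random.sample(pop, 2), key=fitness)
            w = random.random()  # arithmetic (real-coded) crossover
            child = [w * x + (1 - w) * y + random.gauss(0, sigma)
                     for x, y in zip(pa, pb)]
            nxt.append(child)
        pop = nxt
        best = max(pop, key=fitness)
    return best

best = evolve()
print(round(fitness(best), 2))  # approaches the coherent-sum bound N = 8
```

Cancelling interferers would add penalty terms to the fitness for power received from the unwanted directions of arrival.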
Abstract:
The Intelligent Systems Laboratory of the Department of Computer Engineering and Automation of the Federal University of Rio Grande do Norte (UFRN) has, as one of its research projects (Robosense), the construction of a mobile robotic platform. The robot has two differentially driven wheels, two arms with 5 degrees of freedom each, a belt of sonars and a stereo head. The main goal of the Robosense project is for the robot to be able to navigate the entire LECA building while avoiding obstacles. The robot's navigation system, responsible for generating and following routes, will operate in closed loop: sensors inform the robot of its current pose, including its location and the configuration of its resources. Encoders (rotation sensors) were installed on the wheels, as well as on all motors of the two arms and of the stereo head. Limit switches were installed on all joints of the stereo head to allow its pre-calibration. Sonars and cameras are also part of the project's sensor suite. The robot will initially carry a platform with two computers connected to a single bus for parallel, real-time operation. One computer will control the arms and the navigation, based on the information received from the wheel sensors and on the robot's next goals. The other will process all information related to the stereo head, such as the images received from the cameras. Stereo imaging techniques are necessary because the information in a single image does not uniquely determine the position of a corresponding point in the world; by using two or more cameras, we can recover the depth information of the scene.
The proposed stereo head is a physical device that supports two video cameras, moves them on request from appropriate software, and reports its current pose. Factors such as the angular speed of camera movement, spatial precision and accuracy are decisive for the performance of the algorithms that rely on these values.
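The depth-recovery argument above reduces, for an ideal rectified camera pair, to triangulation from disparity. A minimal sketch, with illustrative focal-length and baseline values:

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Ideal rectified stereo pair: Z = f * B / d, where f is the focal
    length in pixels, B the camera baseline, and d the disparity between
    the two images of the same world point."""
    if disparity_px <= 0:
        raise ValueError("point at infinity or invalid match")
    return f_px * baseline_m / disparity_px

# Illustrative numbers: 700 px focal length, 12 cm baseline, 35 px disparity.
print(depth_from_disparity(700, 0.12, 35))  # 2.4 m
```

The formula makes the calibration requirement explicit: errors in the head's reported pose (and thus in f, B and the rectification) propagate directly into the recovered depth.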
Abstract:
In the development of synthetic agents for education, doubt still remains about what behavior could in fact be considered plausible for this type of agent, what makes the agent effective in transmitting knowledge, and what role emotions play in this process. This work is investigative in nature: it attempts to discover which aspects are important for such consistent behavior, through the practical development of a chatterbot acting as a virtual tutor in the context of learning algorithms. In this study, we explain the basics of agents, intelligent tutoring systems, bots and chatterbots, and how these systems need believable behavior to be credible. Models of emotion, personality and humor for computational agents are also covered, as well as previous studies by other researchers in the area. After that, we detail the prototype, the research conducted, a summary of the results achieved, the architectural model of the system, the computational vision and a macro view of the implemented features.
An experimental analysis of exact algorithms applied to the multiobjective spanning tree problem
Abstract:
The Multiobjective Spanning Tree Problem is NP-hard and models applications in several areas. This research presents an experimental analysis of different strategies used in the literature to develop exact algorithms for the problem. Initially, the algorithms are classified according to the approaches used; features of two or more approaches can be found in some of them. The approaches investigated here are: the two-stage method, branch-and-bound, k-best and the preference-based approach. The main contribution of this research lies in the fact that no systematic experimental analysis of exact algorithms for the Multiobjective Spanning Tree Problem has been reported to date; this work can therefore serve as a basis for other research dealing with the same problem. The computational experiments compare the performance of the algorithms regarding processing time, efficiency as a function of the number of objectives, and the number of solutions found within a controlled time interval. The algorithms were analyzed on known instances of the problem, as well as on instances obtained from a generator commonly used in the literature.
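The first phase of the two-stage method mentioned above can be sketched for the biobjective case: weighted-sum scalarizations, each solved with Kruskal's algorithm, yield the supported efficient spanning trees. A toy illustration (not one of the benchmarked implementations):

```python
def kruskal(n, edges, weight):
    """Minimum spanning tree by Kruskal with union-find;
    `weight` maps an edge to a scalar cost."""
    parent = list(range(n))
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v
    tree = []
    for e in sorted(edges, key=weight):
        ru, rv = find(e[0]), find(e[1])
        if ru != rv:
            parent[ru] = rv
            tree.append(e)
    return tree

def supported_solutions(n, edges, steps=10):
    """Two-stage method, first stage: scalarize with lam*c1 + (1-lam)*c2
    over a grid of weights to collect supported efficient cost vectors."""
    found = {}
    for i in range(steps + 1):
        lam = i / steps
        tree = kruskal(n, edges, lambda e: lam * e[2] + (1 - lam) * e[3])
        cost = (sum(e[2] for e in tree), sum(e[3] for e in tree))
        found[cost] = tree
    return found

# Edges (u, v, cost1, cost2) on a 3-vertex graph with conflicting objectives.
edges = [(0, 1, 1, 9), (1, 2, 9, 1), (0, 2, 5, 5)]
print(sorted(supported_solutions(3, edges)))  # [(6, 14), (10, 10), (14, 6)]
```

The second stage, omitted here, searches for the unsupported efficient trees that no weighted sum can reach, which is where branch-and-bound or k-best enumeration comes in.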
Abstract:
This work presents a new model for the Heterogeneous p-median Problem (HPM), proposed to recover the hidden category structures present in data provided by a sorting task procedure, a popular approach to understanding heterogeneous individuals' perceptions of products and brands. The new model is named the Penalty-Free Heterogeneous p-median Problem (PFHPM), a single-objective version of the original HPM. It eliminates the HPM's main parameter, the penalty factor, which weights the terms of the objective function: adjusting this parameter controls how the model recovers the hidden category structures and requires broad knowledge of the problem. Additionally, two complementary formulations for the PFHPM are presented, both mixed-integer linear programs, from which lower bounds were obtained. These values were used to validate a specialized Variable Neighborhood Search (VNS) algorithm proposed to solve the PFHPM. The algorithm provided good-quality solutions, solving artificial instances generated by Monte Carlo simulation as well as real data instances, even with limited computational resources. Statistical analyses presented in this work suggest that the new model and algorithm (PFHPM) can recover the original category structures underlying heterogeneous individual perceptions more accurately than the original model and algorithm (HPM). Finally, an illustrative application of the PFHPM is presented, along with some new possibilities for it, such as extending the model to fuzzy environments.
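As background for the machinery behind the HPM and PFHPM, the sketch below shows the standard (homogeneous) p-median objective and the interchange local search that a VNS typically embeds after each shaking move; it is an illustration on a toy instance, not the specialized algorithm of the thesis.

```python
import itertools

def pmedian_cost(dist, medians):
    """Sum over clients of the distance to their nearest chosen median."""
    return sum(min(dist[i][m] for m in medians) for i in range(len(dist)))

def swap_local_search(dist, medians):
    """First-improvement interchange (swap one median in, one out),
    repeated until no swap improves the objective."""
    medians = set(medians)
    improved = True
    while improved:
        improved = False
        for out_m, in_m in itertools.product(list(medians),
                                             set(range(len(dist))) - medians):
            cand = (medians - {out_m}) | {in_m}
            if pmedian_cost(dist, cand) < pmedian_cost(dist, medians):
                medians, improved = cand, True
                break
    return medians

# Tiny instance: 5 clients on a line (each also a candidate median), p = 2.
pts = [0, 1, 2, 10, 11]
dist = [[abs(a - b) for b in pts] for a in pts]
sol = swap_local_search(dist, {0, 1})
print(sorted(sol), pmedian_cost(dist, sol))
```

A VNS wraps this local search in systematic neighborhood changes (shaking) to escape local optima; the heterogeneous variants replace the single distance matrix with one per individual.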
Abstract:
An important problem faced by the oil industry is distributing multiple oil products through pipelines. Distribution is done in a network composed of refineries (source nodes), storage parks (intermediate nodes) and terminals (demand nodes) interconnected by pipelines that transport oil and derivatives between adjacent areas. Constraints related to storage limits, delivery time, source availability, and sending and receiving limits, among others, must be satisfied. Some researchers treat this problem from a discrete viewpoint in which the flow in the network is seen as the sending of batches. Usually there is no separation device between batches of different products, so losses due to interfaces may be significant. Minimizing delivery time is a typical objective adopted by engineers when scheduling product shipments in pipeline networks, but the costs incurred by interface losses cannot be disregarded. Cost also depends on pumping expenses, which are mostly due to electricity; since the industrial electricity tariff varies over the day, pumping at different time periods has different costs. This work presents an experimental investigation of computational methods designed to distribute oil derivatives in networks while minimizing three objectives simultaneously: delivery time, losses due to interfaces and electricity cost. The problem is NP-hard and is addressed with hybrid evolutionary algorithms, mainly hybridizations of Transgenetic Algorithms with classical multi-objective evolutionary architectures such as MOEA/D, NSGA2 and SPEA2. Three resulting architectures, named MOTA/D, NSTA and SPETA, are applied to the problem. An experimental study compares the algorithms on thirty test cases; the results are analyzed with Pareto-compliant quality indicators, and their significance is evaluated with non-parametric statistical tests.
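Since the comparison relies on Pareto concepts, a short sketch of Pareto dominance and nondominated ranking (the core of architectures such as NSGA2) may help; the objective vectors below are invented for illustration.

```python
def dominates(a, b):
    """a dominates b (minimization): no worse in every objective
    and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def nondominated_fronts(points):
    """Rank solutions into successive Pareto fronts, as in NSGA2's
    nondominated sorting step."""
    remaining = list(points)
    fronts = []
    while remaining:
        front = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining)]
        fronts.append(front)
        remaining = [p for p in remaining if p not in front]
    return fronts

# Invented objective vectors: (delivery time, interface losses, electricity cost).
pts = [(3, 5, 2), (2, 6, 3), (4, 4, 4), (5, 6, 5)]
fronts = nondominated_fronts(pts)
print(fronts[0])  # the first three vectors are mutually nondominated
```

Only (5, 6, 5) is dominated here, so it falls to the second front; quality indicators such as hypervolume then compare the first fronts produced by competing algorithms.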
Abstract:
This master's dissertation presents the study and implementation of intelligent algorithms to monitor the measurements of sensors involved in natural gas custody transfer processes. Artificial neural networks are investigated for this purpose because of particular properties such as learning, adaptation and prediction. A neural predictor is developed to reproduce the dynamic behavior of the sensor output, so that its output can be compared to the real sensor output. A recurrent neural network is used for this purpose because of its ability to deal with dynamic information. The real sensor output and the estimated predictor output form the basis for possible sensor fault detection and diagnosis strategies. Two competitive neural network architectures are investigated and used to classify different kinds of faults. The prediction algorithm, the fault detection and classification strategies, and the results obtained are presented.
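The residual-based detection idea can be sketched without a neural network: below, a least-squares AR(1) model stands in for the recurrent neural predictor, and a fault is flagged when the prediction residual exceeds a threshold. All numbers are illustrative.

```python
def fit_ar1(series):
    """Least-squares AR(1) stand-in for the neural predictor: x[t] ~ a*x[t-1]."""
    num = sum(series[t] * series[t - 1] for t in range(1, len(series)))
    den = sum(series[t - 1] ** 2 for t in range(1, len(series)))
    return num / den

def detect_faults(series, a, threshold):
    """Flag samples whose one-step prediction residual exceeds the threshold."""
    return [t for t in range(1, len(series))
            if abs(series[t] - a * series[t - 1]) > threshold]

healthy = [1.0, 0.9, 0.81, 0.729, 0.6561]   # smoothly decaying sensor signal
a = fit_ar1(healthy)                        # recovers a ~ 0.9
faulty = [1.0, 0.9, 0.81, 3.0, 2.7]         # spiked reading at t = 3
print(round(a, 3), detect_faults(faulty, a, threshold=0.1))
```

Note that the residual spikes at the fault onset (t = 3) and then settles again, which is why the dissertation's second stage, classifying residual patterns with competitive networks, is needed to diagnose the fault type.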
Abstract:
In this work, artificial neural networks (ANNs) based on supervised and unsupervised algorithms were investigated for the study of the rheological parameters of solid pharmaceutical excipients, in order to develop computational tools for manufacturing solid dosage forms. Among the four supervised neural networks investigated, the best learning performance was achieved by a feedforward multilayer perceptron whose architecture comprised eight neurons in the input layer, sixteen in the hidden layer and one in the output layer. Learning and predictive performance for the angle of repose was poor, whereas the Carr index and Hausner ratio (CI and HR, respectively) showed very good fitting and learning capacity; HR and CI were therefore considered suitable descriptors for the next stage of development of supervised ANNs. Clustering capacity was evaluated for five unsupervised strategies. Networks based on purely competitive strategies, the classic "Winner-Take-All", "Frequency-Sensitive Competitive Learning" and "Rival-Penalized Competitive Learning" (WTA, FSCL and RPCL, respectively), were able to cluster the database, but the classification was very poor, with severe errors that grouped data with conflicting properties into the same cluster or even the same neuron; moreover, the criterion adopted by the network for this clustering could not be established. Self-Organizing Maps (SOM) and Neural Gas (NG) networks showed better clustering capacity. Both recognized the two major groupings of data, corresponding to lactose (LAC) and cellulose (CEL). However, SOM made some errors in classifying data from the minority excipients, magnesium stearate (EMG), talc (TLC) and attapulgite (ATP). The NG network, in turn, performed a very consistent classification and resolved the misclassifications of SOM, being the most appropriate network for classifying the data in this study.
The use of NG networks in pharmaceutical technology had not been reported previously. NG therefore has great potential for use in software for automated classification of pharmaceutical powders and as a new tool for mining and clustering data in drug development.
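The Neural Gas update rule favored by the study is simple to state: every unit moves toward each sample by an amount that decays exponentially with the unit's distance rank. A minimal sketch on synthetic two-cluster data (not the pharmaceutical dataset); the annealing schedules are illustrative.

```python
import math, random

def neural_gas(data, n_units=2, epochs=80):
    """Neural Gas: all units adapt toward each sample, scaled by
    exp(-rank/lambda); the neighborhood range lambda and the step
    size are annealed over training, as is standard for NG."""
    random.seed(0)
    units = [list(random.choice(data)) for _ in range(n_units)]
    for t in range(epochs):
        frac = t / (epochs - 1)
        lam = 2.0 * (0.01 / 2.0) ** frac   # neighborhood range decays to 0.01
        eps = 0.3 * (0.05 / 0.3) ** frac   # step size decays to 0.05
        for x in data:
            ranked = sorted(range(n_units),
                            key=lambda i: sum((u - v) ** 2
                                              for u, v in zip(units[i], x)))
            for rank, i in enumerate(ranked):
                h = eps * math.exp(-rank / lam)
                units[i] = [u + h * (v - u) for u, v in zip(units[i], x)]
    return units

# Two well-separated synthetic groups, standing in for the LAC and CEL clusters.
data = [(0.0, 0.0), (0.2, 0.1), (0.1, 0.2), (5.0, 5.0), (5.2, 5.1), (5.1, 4.9)]
units = neural_gas(data)
print(sorted(round(u[0], 1) for u in units))  # one unit settles in each cluster
```

Unlike SOM, the ranking imposes no fixed grid topology, which is one reason NG can avoid the misclassifications reported above for the minority excipients.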
Abstract:
In the facility location problem with limited distances, the objective is to minimize the sum of distance functions from the facility to the customers, with a limit on each distance beyond which the corresponding function becomes constant. The problem has applications in situations where the service provided by the facility is insensitive beyond a given threshold distance (e.g., fire station location). In this work, we propose a global optimization algorithm for the case in which there are lower and upper limits on the number of customers that can be served.
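The limited-distance objective can be stated in a few lines. In the sketch below (illustrative data only), moving the facility has no effect on a customer already beyond its threshold, which is exactly the insensitivity the abstract describes:

```python
import math

def limited_distance_cost(facility, customers, limits):
    """Sum of min(d_i, lambda_i): once a customer is farther than its
    threshold lambda_i, its term is constant in the facility location."""
    total = 0.0
    for (cx, cy), lam in zip(customers, limits):
        d = math.hypot(facility[0] - cx, facility[1] - cy)
        total += min(d, lam)
    return total

customers = [(0, 0), (4, 0), (100, 0)]   # the last customer is out of reach
limits = [3.0, 3.0, 3.0]
# The far customer contributes its constant threshold in both cases:
print(limited_distance_cost((2, 0), customers, limits))  # 2 + 2 + 3 = 7.0
print(limited_distance_cost((1, 0), customers, limits))  # 1 + 3 + 3 = 7.0
```

The min() makes the objective piecewise and non-convex, which is why a global optimization algorithm, rather than a local one, is proposed.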
Abstract:
Optimization techniques known as metaheuristics have achieved success in solving many problems classified as NP-hard. These methods use non-deterministic approaches that find very good solutions without, however, guaranteeing the global optimum. Beyond the inherent difficulties of complex optimization problems, metaheuristics also face the exploration/exploitation dilemma: choosing between a greedy search and a wider exploration of the solution space. One way to guide such algorithms while searching for better solutions is to supply them with more knowledge of the problem through an intelligent agent able to recognize promising regions and to identify when the search direction should be diversified. This work therefore proposes the use of a reinforcement learning technique, the Q-learning algorithm, as an exploration/exploitation strategy for the GRASP (Greedy Randomized Adaptive Search Procedure) and Genetic Algorithm metaheuristics. The GRASP metaheuristic uses Q-learning instead of the traditional greedy-randomized algorithm in its construction phase, in order to improve the quality of the initial solutions used in the local search phase, while also providing the metaheuristic with an adaptive memory mechanism that allows the reuse of good previous decisions and avoids the repetition of bad ones. In the Genetic Algorithm, Q-learning was used to generate an initial population of high fitness and, after a number of generations in which the population diversity falls below a limit L, to supply one of the parents for the genetic crossover operator. Another significant change in the hybrid genetic algorithm is a mutually interactive cooperation process between the genetic operators and the Q-learning algorithm.
In this interactive/cooperative process, the Q-learning algorithm receives an additional update of its matrix of Q-values based on the current best solution of the Genetic Algorithm. The computational experiments presented in this thesis compare the results obtained with traditional versions of the GRASP metaheuristic and the Genetic Algorithm against those obtained with the proposed hybrid methods. Both algorithms were successfully applied to the symmetric Traveling Salesman Problem, modeled as a Markov decision process.
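The Q-learning constructive phase can be sketched for a tiny symmetric TSP instance: the state is the current city, an action chooses the next unvisited city, and the reward is the negative travel distance. This is an illustration of the idea, not the thesis's implementation (which couples the learned construction with GRASP's local search and the genetic operators), and all parameters are illustrative.

```python
import random

def qlearning_tsp_tour(dist, episodes=200, alpha=0.4, gamma=0.8, eps=0.2):
    """Q-learning constructive phase: repeatedly build tours with an
    epsilon-greedy policy and update Q toward reward + discounted future."""
    random.seed(0)
    n = len(dist)
    Q = [[0.0] * n for _ in range(n)]
    for _ in range(episodes):
        city, unvisited = 0, set(range(1, n))
        while unvisited:
            if random.random() < eps:                        # explore
                nxt = random.choice(sorted(unvisited))
            else:                                            # exploit
                nxt = max(unvisited, key=lambda j: Q[city][j])
            future = max((Q[nxt][k] for k in unvisited - {nxt}), default=0.0)
            Q[city][nxt] += alpha * (-dist[city][nxt] + gamma * future
                                     - Q[city][nxt])
            city = nxt
            unvisited.discard(nxt)
    # Build one tour greedily from the learned Q-values.
    tour, unvisited = [0], set(range(1, n))
    while unvisited:
        nxt = max(unvisited, key=lambda j: Q[tour[-1]][j])
        tour.append(nxt)
        unvisited.discard(nxt)
    return tour

# Four cities on a square; the unit-cost edges form the optimal perimeter tour.
dist = [[0, 1, 2, 1],
        [1, 0, 1, 2],
        [2, 1, 0, 1],
        [1, 2, 1, 0]]
print(qlearning_tsp_tour(dist))
```

In the hybrid GRASP, a tour built this way replaces the greedy-randomized construction and is then handed to the local search; the Q matrix persists across iterations, providing the adaptive memory described above.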
Abstract:
The two-dimensional periodic structures called frequency selective surfaces have been widely investigated because of their filtering properties. Like filters operating in the traditional radiofrequency bands, such structures can behave as band-stop or band-pass filters, depending on the array elements (patch or aperture, respectively), and can be used in a variety of applications, such as radomes, dichroic reflectors, waveguide filters, artificial magnetic conductors and microwave absorbers. To provide high-performance filtering at microwave bands, electromagnetic engineers have investigated various types of periodic structures: reconfigurable frequency selective screens, multilayered selective filters, and periodic arrays printed on anisotropic dielectric substrates or composed of fractal elements. In general, there is no closed-form path from a desired frequency response to a corresponding device, so the analysis of the scattering characteristics requires rigorous full-wave techniques. Moreover, because full-wave simulation of the frequency-selective-surface scattering variables is computationally expensive, many electromagnetic engineers still iterate by trial and error until a given design criterion is achieved. As this procedure is laborious and operator-dependent, optimization techniques are required to design practical periodic structures with the desired filter specifications. Some authors have employed neural networks and natural optimization algorithms, such as genetic algorithms and particle swarm optimization, for frequency selective surface design and optimization. The objective of this work is a rigorous study of the electromagnetic behavior of periodic structures, enabling the design of efficient devices for the microwave band.
For this, artificial neural networks are used together with natural optimization techniques, allowing the accurate and efficient investigation of various types of frequency selective surfaces in a simple and fast manner, becoming a powerful tool for the design and optimization of such structures.
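Of the natural optimization algorithms cited, particle swarm optimization is compact enough to sketch. In the illustration below the swarm minimizes a stand-in objective (a sphere function); in a real FSS design loop the objective would instead score a candidate surface via a full-wave solver or a trained neural-network surrogate. All parameters are illustrative.

```python
import random

def pso(f, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Particle swarm optimization: each particle is pulled toward its own
    best position (pbest) and the swarm's best position (gbest)."""
    random.seed(0)
    pos = [[random.uniform(-5, 5) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=f)[:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest

# Stand-in objective; a design loop would evaluate an FSS response here.
sphere = lambda x: sum(v * v for v in x)
best = pso(sphere, dim=3)
print(sphere(best))  # close to the global minimum at the origin
```

Because each fitness evaluation may cost a full-wave simulation, the thesis pairs such optimizers with neural-network models that approximate the FSS response cheaply.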