831 results for Computational Complexity
Abstract:
Natural ventilation is an efficient bioclimatic strategy, one that provides thermal comfort, healthy air and cooling to the building. However, the disregard for environmental quality, the uncertainties involved in the phenomenon and the popularization of artificial climate-control systems serve as an excuse for those who neglect the benefits of passive cooling. Unfamiliarity with the concept may be lessened if ventilation is considered at every step of the project, especially in the initial phase, in which decisions bear a great impact on the construction process. The tools available to quantify the impact of design decisions consist basically of air renewal rate calculations or computational fluid dynamics (CFD) simulations, both somewhat removed from the project's execution and ill-suited to parametric studies. Thus, we chose to verify, through computer simulation, the representativeness of the results of a simplified air renewal rate calculation method, as well as to make it more compatible with the questions relevant to the first phases of the design process. The case object consists of a model derived from the recommendations of the Código de Obras de Natal/RN, adapted according to NBR 15220. The study has shown the complexity of integrating a CFD tool into the process and the need for a method capable of generating data at a rate compatible with the flow of ideas created and discarded during the project's development. At the end of our study, we discuss the concessions necessary to carry out the simulations, the applicability and limitations of both the tools used and the method adopted, as well as the representativeness of the results obtained.
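A minimal sketch of the kind of simplified air renewal (air change) rate calculation mentioned above. The room dimensions, opening area, wind speed and effectiveness coefficient are hypothetical placeholders for illustration, not values from the study.

```python
# Hypothetical sketch: simplified air renewal (air change) rate estimate.
# All numbers below are illustrative placeholders, not values from the study.

def air_changes_per_hour(airflow_m3_per_h: float, room_volume_m3: float) -> float:
    """Air renewal rate: outside-air volume supplied per hour / room volume."""
    return airflow_m3_per_h / room_volume_m3

def wind_driven_airflow_m3_per_h(opening_area_m2: float, wind_speed_m_s: float,
                                 effectiveness: float = 0.5) -> float:
    """Simple wind-driven airflow through an opening: Q = C * A * v."""
    return effectiveness * opening_area_m2 * wind_speed_m_s * 3600.0

if __name__ == "__main__":
    room_volume = 4.0 * 5.0 * 2.7                     # m^3, hypothetical room
    q = wind_driven_airflow_m3_per_h(1.2, 2.0)        # 1.2 m^2 window, 2 m/s wind
    print(f"ACH ~ {air_changes_per_hour(q, room_volume):.1f} air changes per hour")
```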
Abstract:
Schistosomiasis is still an endemic disease in many regions, with 250 million people infected with Schistosoma and about 500,000 deaths per year. Praziquantel (PZQ) is the drug of choice for schistosomiasis treatment; however, it is classified as Class II in the Biopharmaceutics Classification System, as its low solubility hinders its performance in biological systems. The use of cyclodextrins is a useful tool to increase the solubility and bioavailability of drugs. The aim of this work was to prepare an inclusion compound of PZQ and methyl-beta-cyclodextrin (MeCD), perform its physico-chemical characterization, and explore its in vitro cytotoxicity. SEM showed a change in the morphological characteristics of the PZQ:MeCD crystals, and IR data supported this finding, with changes after interaction with MeCD including effects on the C-H of the aromatic ring, observed at 758 cm⁻¹. Differential scanning calorimetry measurements revealed that complexation occurred in a 1:1 molar ratio, as evidenced by the lack of a PZQ transition temperature after inclusion into the MeCD cavity. In solution, the PZQ UV spectrum profile in the presence of MeCD was comparable to the PZQ spectrum in a hydrophobic solvent. Phase solubility diagrams showed a 5.5-fold increase in PZQ solubility and were indicative of a type A_L isotherm, which was used to determine an association constant (K_a) of 140.8 M⁻¹. No cytotoxicity of the PZQ:MeCD inclusion compound was observed in tests using 3T3 cells. The results suggest that the association of PZQ with MeCD could be a good alternative for the treatment of schistosomiasis.
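For a type A_L (linear) phase-solubility diagram with 1:1 stoichiometry, the association constant is commonly obtained from the standard Higuchi-Connors relation below; this is textbook material, not an equation quoted from the abstract.

```latex
% Higuchi-Connors relation for a 1:1 (A_L-type) inclusion complex:
% slope taken from the linear phase-solubility diagram, S_0 = intrinsic drug solubility.
K_a = \frac{\text{slope}}{S_0\,(1 - \text{slope})}
```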
Abstract:
Optimization techniques known as metaheuristics have achieved success in solving many problems classified as NP-hard. These methods use non-deterministic approaches that reach very good solutions but do not guarantee finding the global optimum. Beyond the inherent difficulties related to the complexity that characterizes optimization problems, metaheuristics still face the exploration/exploitation dilemma, which consists of choosing between a greedy search and a wider exploration of the solution space. One way to guide such algorithms during the search for better solutions is to supply them with more knowledge of the problem through an intelligent agent able to recognize promising regions and to identify when the direction of the search should be diversified. Accordingly, this work proposes the use of a Reinforcement Learning technique - the Q-learning algorithm - as the exploration/exploitation strategy for the metaheuristics GRASP (Greedy Randomized Adaptive Search Procedure) and Genetic Algorithm. The GRASP metaheuristic uses Q-learning instead of the traditional greedy-randomized algorithm in the construction phase. This replacement aims to improve the quality of the initial solutions used in the local search phase of GRASP, and also provides the metaheuristic with an adaptive memory mechanism that allows the reuse of good previous decisions and avoids the repetition of bad ones. In the Genetic Algorithm, the Q-learning algorithm was used to generate an initial population of high fitness and, after a given number of generations in which the diversity rate of the population falls below a certain limit L, it was also applied to supply one of the parents used in the genetic crossover operator. Another significant change in the hybrid genetic algorithm is the proposal of a mutually interactive cooperation process between the genetic operators and the Q-learning algorithm. In this interactive/cooperative process, the Q-learning algorithm receives an additional update to the matrix of Q-values based on the current best solution of the Genetic Algorithm. The computational experiments presented in this thesis compare the results obtained with traditional versions of the GRASP metaheuristic and the Genetic Algorithm with those obtained using the proposed hybrid methods. Both algorithms were applied successfully to the symmetric Traveling Salesman Problem, which was modeled as a Markov decision process.
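A minimal sketch of tabular Q-learning used as a constructive policy for symmetric TSP tours, in the spirit of the hybrid construction phase described above. The distance matrix, the reward definition (negative edge length) and the parameter values are illustrative assumptions, not the thesis's exact formulation.

```python
import random

# Hypothetical illustration: tabular Q-learning building TSP tours.
# States are current cities, actions are next cities; reward = -distance (assumption).
def q_learning_tour(dist, episodes=500, alpha=0.1, gamma=0.9, epsilon=0.2):
    n = len(dist)
    Q = [[0.0] * n for _ in range(n)]               # Q[s][a]: value of going s -> a
    best_tour, best_len = None, float("inf")
    for _ in range(episodes):
        start = random.randrange(n)
        tour, visited = [start], {start}
        while len(tour) < n:
            s = tour[-1]
            candidates = [a for a in range(n) if a not in visited]
            if random.random() < epsilon:           # exploration
                a = random.choice(candidates)
            else:                                   # exploitation
                a = max(candidates, key=lambda c: Q[s][c])
            r = -dist[s][a]
            future = max((Q[a][c] for c in range(n) if c not in visited and c != a),
                         default=0.0)
            Q[s][a] += alpha * (r + gamma * future - Q[s][a])   # Q-learning update
            tour.append(a)
            visited.add(a)
        length = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
        if length < best_len:
            best_tour, best_len = tour, length
    return best_tour, best_len

if __name__ == "__main__":
    dist = [[0, 2, 9, 10], [2, 0, 6, 4], [9, 6, 0, 3], [10, 4, 3, 0]]  # toy instance
    print(q_learning_tour(dist))
```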
Abstract:
Frequency selective surfaces (FSS) are structures consisting of periodic arrays of conducting elements, called patches, which are usually very thin and printed on dielectric layers, or of apertures perforated in very thin metallic surfaces, for applications in the microwave and millimeter-wave bands. These structures are often used in aircraft, missiles, satellites, radomes, reflector antennas, high-gain antennas and microwave ovens, for example. The main purpose of these structures is to filter frequency bands, which can be transmitted or rejected depending on the requirements of the intended application. In turn, modern communication systems such as GSM (Global System for Mobile Communications), RFID (Radio Frequency Identification), Bluetooth, Wi-Fi and WiMAX, whose services are in high demand by society, have required the development of antennas whose main features are low profile, low cost, and reduced dimensions and weight. In this context, the microstrip antenna presents itself as an excellent choice for today's communication systems because, in addition to intrinsically meeting the requirements mentioned, planar structures are easy to manufacture and to integrate with other components in microwave circuits. Consequently, the analysis and synthesis of these devices, mainly due to the wide variety of possible shapes, sizes and operating frequencies of their elements, has been carried out with full-wave models, such as the finite element method, the method of moments and the finite-difference time-domain method. However, although these methods are accurate, they demand great computational effort. In this context, computational intelligence (CI) has been used successfully in the design and optimization of planar microwave structures, as a very appropriate auxiliary tool, given the complexity of the geometry of the antennas and the FSS considered. Computational intelligence is inspired by natural phenomena such as learning, perception and decision-making, using techniques such as artificial neural networks, fuzzy logic, fractal geometry and evolutionary computation. This work studies the application of computational intelligence, using metaheuristics such as genetic algorithms and swarm intelligence, to the optimization of antennas and frequency selective surfaces. Genetic algorithms are computational search methods based on genetics and on the theory of natural selection proposed by Darwin, used to solve complex problems, e.g., problems where the search space grows with the size of the problem. Particle swarm optimization is characterized by the use of collective intelligence and has been applied to optimization problems in many areas of research. The main objective of this work is the use of computational intelligence in the analysis and synthesis of antennas and FSS. We considered the structures of a ring-type planar monopole microstrip antenna and a cross-dipole FSS. Optimization algorithms were developed and results were obtained for the optimized geometries of the antennas and FSS considered. To validate the results, several prototypes were designed, built and measured. The measured results showed excellent agreement with the simulated ones. Moreover, the results obtained in this study were compared with those simulated using commercial software, and excellent agreement was also observed.
Specifically, the efficiency of the CI techniques used was evidenced by simulated and measured results, aiming at the optimization of the bandwidth of an antenna for wideband or UWB (Ultra Wideband) operation, using a genetic algorithm, and at the optimization of the bandwidth by specifying the length of the air gap between two frequency selective surfaces, using a particle swarm optimization algorithm.
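A minimal particle swarm optimization sketch for a single design variable such as the air-gap length mentioned above. The objective function below is a hypothetical placeholder standing in for the bandwidth response, which in practice would come from an electromagnetic solver or from measurements.

```python
import random

# Hypothetical placeholder objective: in the real problem this would be a figure
# of merit (e.g. negative bandwidth) obtained from an EM simulation for a given
# air gap, in mm. The toy function below simply has its optimum at 6 mm.
def objective(gap_mm: float) -> float:
    return (gap_mm - 6.0) ** 2

def pso_1d(fn, lower, upper, n_particles=20, iterations=100, w=0.7, c1=1.5, c2=1.5):
    """Standard global-best PSO for a single bounded design variable."""
    x = [random.uniform(lower, upper) for _ in range(n_particles)]
    v = [0.0] * n_particles
    pbest = x[:]                                   # personal best positions
    pbest_val = [fn(p) for p in pbest]
    g = pbest[pbest_val.index(min(pbest_val))]     # global best position
    g_val = min(pbest_val)
    for _ in range(iterations):
        for i in range(n_particles):
            r1, r2 = random.random(), random.random()
            v[i] = w * v[i] + c1 * r1 * (pbest[i] - x[i]) + c2 * r2 * (g - x[i])
            x[i] = min(max(x[i] + v[i], lower), upper)   # keep inside bounds
            val = fn(x[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = x[i], val
                if val < g_val:
                    g, g_val = x[i], val
    return g

if __name__ == "__main__":
    print(f"optimized air gap ~ {pso_1d(objective, 0.0, 20.0):.2f} mm")
```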
Abstract:
In this work, the implementation of the SOM (Self-Organizing Maps) algorithm, or Kohonen neural network, is presented in the form of hierarchical structures applied to image compression. The main objective of this approach is to develop a hierarchical SOM algorithm with a static structure and another with a dynamic structure to generate codebooks in the process of image Vector Quantization (VQ), reducing the processing time and obtaining a good image compression rate with minimal degradation of quality in relation to the original image. The two self-organizing neural networks developed here were named HSOM, for the static case, and DHSOM, for the dynamic case. In the first, the hierarchical structure is defined beforehand, while in the latter the structure grows automatically according to heuristic rules that explore the data of the training set without the use of external parameters. For this network, the heuristic rules determine the growth dynamics, the criteria for pruning branches, the flexibility and the size of the child maps. The LBG (Linde-Buzo-Gray) algorithm, or K-means, one of the most widely used algorithms for building codebooks for Vector Quantization, was used together with the Kohonen algorithm in its basic, non-hierarchical form as a reference to compare the performance of the algorithms proposed here. A performance analysis between the two hierarchical structures is also carried out in this work. The efficiency of the proposed processing is verified by the reduction in computational complexity compared to the traditional algorithms, as well as through the quantitative analysis of the reconstructed images in terms of the peak signal-to-noise ratio (PSNR) and the mean squared error (MSE).
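A minimal flat (non-hierarchical) Kohonen SOM sketch for training a VQ codebook from image blocks, included only to illustrate the basic mechanism that the HSOM/DHSOM variants build upon. The block size, map size, learning schedule and random data are illustrative assumptions, not the thesis's configuration.

```python
import numpy as np

# Hypothetical sketch: flat (non-hierarchical) SOM training a VQ codebook
# from 4x4 image blocks. Parameters below are illustrative assumptions.
def train_som_codebook(blocks, n_codewords=64, epochs=10, lr0=0.5, sigma0=8.0, seed=0):
    rng = np.random.default_rng(seed)
    dim = blocks.shape[1]
    codebook = rng.uniform(blocks.min(), blocks.max(), size=(n_codewords, dim))
    grid = np.arange(n_codewords)                       # 1-D neuron topology
    n_steps = epochs * len(blocks)
    step = 0
    for _ in range(epochs):
        for x in rng.permutation(blocks):
            lr = lr0 * (1.0 - step / n_steps)           # decaying learning rate
            sigma = max(sigma0 * (1.0 - step / n_steps), 0.5)
            bmu = np.argmin(np.linalg.norm(codebook - x, axis=1))   # best-matching unit
            h = np.exp(-((grid - bmu) ** 2) / (2 * sigma ** 2))     # neighborhood
            codebook += lr * h[:, None] * (x - codebook)
            step += 1
    return codebook

def quantize(blocks, codebook):
    """Map each block to the index of its nearest codeword (the compressed stream)."""
    d = np.linalg.norm(blocks[:, None, :] - codebook[None, :, :], axis=2)
    return np.argmin(d, axis=1)

if __name__ == "__main__":
    fake_image_blocks = np.random.default_rng(1).random((1000, 16))  # 4x4 blocks
    cb = train_som_codebook(fake_image_blocks)
    print(quantize(fake_image_blocks[:5], cb))
```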
Abstract:
Artificial neural networks are usually applied to solve complex problems. In more complex problems, greater functional capability can be achieved by increasing the number of layers and neurons; nevertheless, this leads to greater computational effort. The response time is an important factor in the decision to use neural networks in some systems. Many argue that the computational cost is higher in the training period; however, this phase is carried out only once. Once the network is trained, it is necessary to use the available computational resources efficiently. In the multicore era, the problem boils down to the efficient use of all available processing cores, although the overhead of parallel computing must be taken into account. In this sense, this paper proposes a modular structure that proved to be more suitable for parallel implementations. We propose to parallelize the feedforward process of an MLP-type artificial neural network, implemented with OpenMP on a shared-memory computer architecture. The research consists of testing and analyzing execution times; speedup, efficiency and parallel scalability are analyzed. In the proposed approach, by reducing the number of connections between remote neurons, the response time of the network decreases and, consequently, so does the total execution time. The time required for communication and synchronization is directly linked to the number of remote neurons in the network, so it is necessary to investigate the best distribution of remote connections.
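A conceptual sketch of a neuron-partitioned feedforward pass, written in Python with a thread pool standing in for the OpenMP parallel region described above. The layer sizes, weights and even partitioning scheme are illustrative assumptions, not the paper's modular topology.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

# Conceptual stand-in for an OpenMP-parallel MLP feedforward: the hidden layer
# is partitioned into contiguous blocks of neurons, and each worker computes
# the activations of its own block. Sizes and weights below are illustrative.
def forward_partitioned(x, W_hidden, b_hidden, W_out, b_out, n_workers=4):
    n_hidden = W_hidden.shape[0]
    bounds = np.linspace(0, n_hidden, n_workers + 1, dtype=int)

    def compute_block(k):
        lo, hi = bounds[k], bounds[k + 1]
        # each worker handles only the neurons [lo, hi) of the hidden layer
        return np.tanh(W_hidden[lo:hi] @ x + b_hidden[lo:hi])

    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        hidden = np.concatenate(list(pool.map(compute_block, range(n_workers))))
    return W_out @ hidden + b_out          # output layer computed serially

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.random(32)
    W_h, b_h = rng.random((256, 32)), rng.random(256)
    W_o, b_o = rng.random((4, 256)), rng.random(4)
    print(forward_partitioned(x, W_h, b_h, W_o, b_o))
```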
Abstract:
Operating industrial processes is becoming more complex every day, and one of the factors contributing to this growth in complexity is the integration of new technologies and smart solutions employed in industry, such as decision support systems. In this regard, this dissertation aims to develop a decision support system based on a computational tool called an expert system. The main goal is to make operation more reliable and secure while maximizing the amount of information relevant to each situation, by using an expert system based on rules designed for a particular area of expertise. For the modeling of such rules, a high-level environment has been proposed, which allows rules to be created and manipulated more easily through visual programming. Despite its wide range of possible applications, this dissertation focuses only on the context of real-time filtering of alarms during operation, duly validated in a case study based on a real scenario that occurred in an industrial plant of an oil and gas refinery.
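A minimal sketch of rule-based alarm filtering of the kind described above. The alarm fields, rule conditions and suppression actions are hypothetical examples, not rules from the case study.

```python
# Hypothetical sketch of rule evaluation for real-time alarm filtering.
# Alarm fields, rules and thresholds are illustrative, not from the case study.

ALARMS = [
    {"tag": "PT-101", "type": "high_pressure", "value": 12.4, "unit_shutdown": False},
    {"tag": "TT-205", "type": "high_temperature", "value": 80.0, "unit_shutdown": True},
    {"tag": "LT-310", "type": "low_level", "value": 5.0, "unit_shutdown": True},
]

# Each rule: (condition on an alarm, action label), evaluated in priority order.
RULES = [
    (lambda a: a["unit_shutdown"], "suppress: unit already shut down"),
    (lambda a: a["type"] == "high_pressure" and a["value"] > 10.0,
     "escalate: critical pressure"),
    (lambda a: True, "show: default"),
]

def filter_alarms(alarms, rules):
    """Apply the first matching rule to each alarm."""
    decisions = []
    for alarm in alarms:
        for condition, action in rules:
            if condition(alarm):
                decisions.append((alarm["tag"], action))
                break
    return decisions

if __name__ == "__main__":
    for tag, action in filter_alarms(ALARMS, RULES):
        print(tag, "->", action)
```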
Abstract:
The processing of materials by plasma has grown considerably in recent times in several technological applications, more specifically in surface treatment. This growth is due mainly to the great applicability of plasmas as an energy source, in which they exhibit thermal, chemical and/or physical behavior. On the other hand, the multiplicity of simultaneous physical effects (thermal, chemical and physical interactions) present in plasmas increases the difficulty of understanding their interaction with solids. In that sense, as an initial step in the development of this subject, the present work addresses the computational simulation of the heating and cooling processes of steel and copper samples immersed in a plasma atmosphere, considering two experimental geometric configurations: hollow cathode and plane cathode. To reach this goal, three computational models were developed in Fortran 90: a one-dimensional transient model (1D, t), a two-dimensional transient model (2D, t), and a two-dimensional transient model (2D, t) that takes into account the presence of a sample holder in the experimental assembly. The models were developed based on the finite volume method and, for the two-dimensional configurations, the effect of the hollow cathode on the sample was considered as a lateral external heat source. The main results obtained with the three computational models, such as temperature distributions and thermal gradients in the samples and in the holder, were compared with those obtained by the Plasma Laboratory, LabPlasma/UFRN, and with experiments available in the literature. The observed behavior indicates the validity of the developed codes and illustrates the need for this kind of computational tool in this type of process, given the great ease of obtaining thermal information of interest.
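A minimal sketch of a one-dimensional transient finite-volume heat conduction model of the kind described above (explicit time marching on a uniform grid, with an imposed heat flux on the plasma-facing surface). The material properties, boundary heat flux and grid parameters are illustrative placeholders, not values from the work, which was implemented in Fortran 90.

```python
import numpy as np

# Hypothetical sketch: 1-D transient heat conduction, finite-volume, explicit scheme.
# Material properties and plasma heat flux below are illustrative placeholders.
def simulate_heating(length=0.01, n_cells=50, t_end=5.0,
                     k=50.0, rho=7800.0, cp=500.0,      # steel-like properties
                     q_plasma=2.0e5, T_init=300.0):
    dx = length / n_cells
    alpha = k / (rho * cp)
    dt = 0.4 * dx * dx / alpha                  # respect the explicit stability limit
    T = np.full(n_cells, T_init)
    t = 0.0
    while t < t_end:
        Tn = T.copy()
        # interior volumes: balance of conductive fluxes across the two faces
        T[1:-1] = Tn[1:-1] + alpha * dt / dx**2 * (Tn[2:] - 2 * Tn[1:-1] + Tn[:-2])
        # left face: imposed plasma heat flux (surface exposed to the discharge)
        T[0] = Tn[0] + dt / (rho * cp * dx) * (q_plasma - k * (Tn[0] - Tn[1]) / dx)
        # right face: adiabatic (no flux) as a simplifying assumption
        T[-1] = Tn[-1] + alpha * dt / dx**2 * (Tn[-2] - Tn[-1])
        t += dt
    return T

if __name__ == "__main__":
    T = simulate_heating()
    print(f"surface temperature ~ {T[0]:.1f} K after 5 s (toy parameters)")
```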
Abstract:
In this paper, a fully automatic strategy is proposed to reduce the complexity of patterns (vegetation, buildings, soil etc.) that interact with the object 'road' in color images, thus reducing the difficulty of the automatic extraction of this object. The proposed methodology consists of three sequential steps. In the first step, a pointwise operator is applied to compute the artificiality index known as NandA (Natural and Artificial). The result is an image whose intensity attribute is the NandA response. The second step consists in automatically thresholding the image obtained in the previous step, resulting in a binary image. This image usually allows the separation of artificial from natural objects. The third step consists in applying a pre-existing road seed extraction methodology to the previously generated binary image. Several experiments carried out with real images allowed verification of the potential of the proposed methodology. The comparison of the obtained results with others obtained by a similar methodology for road seed extraction from gray-level images showed that the main benefit was a drastic reduction in computational effort.
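A minimal sketch of the automatic thresholding step, using Otsu's method as an assumed stand-in (the paper does not specify which automatic thresholding algorithm is used); the index image below is a random placeholder, not a NandA response.

```python
import numpy as np

# Sketch of automatically thresholding an index image into a binary image.
# Otsu's method is an assumption here; the paper does not name the algorithm.
def otsu_threshold(img_u8):
    hist = np.bincount(img_u8.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2      # between-class variance
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t

if __name__ == "__main__":
    index_image = (np.random.default_rng(0).random((64, 64)) * 255).astype(np.uint8)
    t = otsu_threshold(index_image)
    binary = index_image >= t                 # artificial (True) vs natural (False)
    print("threshold:", t, "artificial fraction:", binary.mean())
```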
Abstract:
Research in the area of teacher training in English as a Foreign Language (CELANI, 2003, 2004, 2010; PAIVA, 2000, 2003, 2005; VIEIRA-ABRAHÃO, 2010) articulates the complexity of beginning teachers' classroom contexts aligned with teaching language as a social and professional practice of the teacher in training. To better understand this relationship, the present study is based on a corpus of transcribed interviews and questionnaires applied to 28 undergraduate students majoring in Letters/English emphasis at a public university located in the interior of the Western Amazon region, soliciting their opinions about the reforms made in the curriculum of this major. Interviews and questionnaires were used as data collection instruments to trace a profile of the students, organized in Group 1, with freshman and sophomore undergraduates following the 2009 curriculum, and Group 2, with junior and senior undergraduates following the 2006 curriculum. The objectives are to identify, characterize and analyze the types of pronouns, roles and social actors represented in the opinions of these students in relation to their teacher training curriculum. The theoretical support focuses on the challenges of historical and contemporary routes of English teachers' initial education programs (MAGALHÃES; LIBERALLI, 2009; PAVAN; SILVA, 2010; ALVAREZ, 2010; VIANA, 2011; PAVAN, 2012). Our theoretical perspective is based on the Systemic Functional Grammar of Halliday (1994), Halliday and Hasan (1989), Halliday and Matthiessen (2004), Eggins (1994; 2004) and Thompson (2004). We focus on the concept of interpersonal meaning, specifically regarding the roles articulated in the studies by Delu (1991) and Thompson and Thetela (1995), and, in the Portuguese language, by Ramos (1997), Silva (2006) and Cabral (2009). Moreover, we adopt van Leeuwen's (1997; 2003) theory of the Representation of Social Actors as a theoretical framework to identify the sociological aspect of the social actors represented in the students' discourse. Within this scenario, the analysis unfolds on three levels: grammatical (pronouns), semantic (roles), and discursive (social actors). For the analysis of the interpersonal realizations present in the students' opinions, we use the computational program WordSmith Tools (SCOTT, 2010) and its tools Wordlist and Concord to quantify the occurrences of the pronouns I, You and They, which characterize the roles and social actors of the corpus. The results show that the students assigned the following roles to themselves: (i) apprentice, to express their initial process of English language learning; (ii) freshman, to reveal their choice of the major in Letters/English emphasis; (iii) future teacher, to relate their expectations towards a practicing professional. To assign roles to the professors in the major, the students used the metaphor of modality (I think) to indicate the relationship of teacher training while in the role of a student and as a future teacher. From this evidence, the representation of the students as social actors emerges in roles such as: (i) active roles; (ii) passive roles; and (iii) personalized roles. The social actors represented in the opinions of the students reflect the inclusion of these roles assigned to the actions expressed about their experiences and expectations derived from their teacher training classroom.
Abstract:
This paper proposes the application of computational intelligence techniques to assist with complex problems concerning lightning in transformers. In order to estimate the currents related to lightning in a transformer, a neural tool is presented, with ATP used to generate the training vectors. The input variables used in the Artificial Neural Networks (ANN) were the wave front time, the wave tail time and the voltage variation rate, and the output variable was the maximum current in the secondary of the transformer. These parameters can define the behavior and severity of lightning. Based on these concepts and on the results obtained, it can be verified that the overvoltages at the secondary of the transformer are also affected by the discharge waveform, in a similar way to the primary side. By using the developed tool, the high-voltage process in distribution transformers can be mapped and estimated with more precision, aiding the transformer design process, reducing empiricism and evaluation errors, and contributing to minimizing the failure rate of transformers. (C) 2011 Elsevier Ltd. All rights reserved.
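A minimal sketch of an ANN regression of this kind, mapping the three waveform parameters to the peak secondary current. The scikit-learn model and the synthetic placeholder data are assumptions for illustration only; they stand in for the paper's tool and the ATP-generated training vectors, which are not reproduced here.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Hypothetical sketch: MLP mapping (front time, tail time, dV/dt) -> peak
# secondary current. The synthetic data below merely stands in for the
# ATP-generated training vectors mentioned in the abstract.
rng = np.random.default_rng(0)
X = rng.random((500, 3)) * [10.0, 100.0, 5.0]        # front, tail, dV/dt (placeholder units)
y = 0.8 * X[:, 2] + 0.01 * X[:, 1] - 0.05 * X[:, 0]  # placeholder relationship

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
model.fit(X_train, y_train)
print("R^2 on held-out samples:", round(model.score(X_test, y_test), 3))
```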
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Classical Monte Carlo simulations were carried out in the NPT ensemble at 25°C and 1 atm, aiming to investigate the ability of the TIP4P water model [Jorgensen, Chandrasekhar, Madura, Impey and Klein; J. Chem. Phys., 79 (1983) 926] to reproduce the newest structural picture of liquid water. The results were compared with recent neutron diffraction data [Soper, Bruni and Ricci; J. Chem. Phys., 106 (1997) 247]. The influence of the computational conditions on the thermodynamic and structural results obtained with this model was also analyzed. The findings were compared with the original ones from Jorgensen et al. [above-cited reference plus Mol. Phys., 56 (1985) 1381]. It is noticed that the thermodynamic results depend on the boundary conditions used, whereas the usual radial distribution functions g_OO(r) and g_OH(r) do not.
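For reference, the standard Metropolis acceptance criterion for a volume move in an NPT-ensemble Monte Carlo simulation of N molecules is given below; this is textbook material, not an equation taken from the paper.

```latex
% Metropolis acceptance probability for an NPT volume move (V -> V'):
P_{\mathrm{acc}} = \min\!\left\{1,\;
  \exp\!\left[-\beta\left(\Delta U + P\,\Delta V\right)
  + N \ln\!\frac{V'}{V}\right]\right\},
\qquad \beta = \frac{1}{k_{B}T}
```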
Abstract:
The Car Rental Salesman Problem (CaRS) is a variant of the classical Traveling Salesman Problem, not previously described in the literature, in which a tour of visits can be decomposed into contiguous paths that may be traveled in different rental cars. The aim is to determine the Hamiltonian cycle that results in the minimum final cost, considering the cost of the route added to the cost of the penalty paid for each exchange of vehicles on the route, a penalty due to returning the dropped car to its home base. This paper introduces the general problem and illustrates some examples, also presenting some of its associated variants. An overview of the complexity of this combinatorial problem is also outlined to justify its classification in the NP-hard class. A database of instances for the problem is presented, along with the methodology of its construction. The problem is also the subject of a study based on the experimental implementation of six metaheuristic solutions, representing adaptations of the best of state-of-the-art heuristic programming. New neighborhoods, construction procedures, search operators, evolutionary agents and multi-pheromone cooperation were created for this problem. Furthermore, computational experiments and comparative performance tests were conducted on a sample of 60 instances of the created database, aiming to offer an efficient solution algorithm for this problem. The results illustrate that the best performance was reached by the transgenetic algorithm in all instances of the dataset.
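A minimal sketch of how a CaRS tour could be evaluated: the route cost of each contiguous segment driven with one car, plus a penalty for returning each dropped car to the city where it was rented. The data layout (one distance matrix and one return-penalty matrix per car) is an illustrative assumption about the cost structure, not the thesis's instance format.

```python
# Hypothetical sketch of a CaRS tour evaluation. The tour is a Hamiltonian
# cycle; 'rentals' lists (car, start_position_in_tour) segments in order.
# dist[c][i][j] : travel cost from city i to j using car c (assumption)
# ret[c][i][j]  : penalty for renting car c at city i and dropping it at j (assumption)

def cars_tour_cost(tour, rentals, dist, ret):
    n = len(tour)
    total = 0.0
    for seg, (car, start) in enumerate(rentals):
        end = rentals[seg + 1][1] if seg + 1 < len(rentals) else n   # segment end
        # travel cost of the contiguous path driven with this car
        for k in range(start, end):
            total += dist[car][tour[k]][tour[(k + 1) % n]]
        # penalty for returning the car from the drop city to the rental city
        pickup_city = tour[start]
        drop_city = tour[end % n]
        total += ret[car][pickup_city][drop_city]
    return total

if __name__ == "__main__":
    # toy instance: 4 cities, 2 cars (all numbers illustrative)
    d0 = [[0, 2, 9, 10], [2, 0, 6, 4], [9, 6, 0, 3], [10, 4, 3, 0]]
    d1 = [[0, 3, 7, 8], [3, 0, 5, 6], [7, 5, 0, 4], [8, 6, 4, 0]]
    r = [[[0, 1, 2, 2], [1, 0, 1, 2], [2, 1, 0, 1], [2, 2, 1, 0]]] * 2
    print(cars_tour_cost(tour=[0, 1, 2, 3], rentals=[(0, 0), (1, 2)],
                         dist=[d0, d1], ret=r))
```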
Abstract:
The constant increase in the complexity of computer applications demands the development of more powerful hardware to support them. With processors' operating frequencies reaching their limit, the most viable solution is the use of parallelism. The concept of MPSoCs (Multi-Processor Systems-on-Chip) is based on parallelism techniques and on the progressive growth of transistor integration capacity in a single chip. MPSoCs will eventually become a cheaper and faster alternative to supercomputers and clusters, and applications developed for these high-performance systems will migrate to computers equipped with MPSoCs containing dozens to hundreds of computation cores. In particular, applications in the area of oil and natural gas exploration are also characterized by the high processing capacity they require and would benefit greatly from these high-performance systems. This work intends to evaluate a traditional and complex application of the oil and gas industry, known as reservoir simulation, by developing a solution on integrated computational systems in a single chip with hundreds of functional units. For this, since the STORM (MPSoC Directory-Based Platform) platform already had a shared memory model, a new distributed memory model was developed, and a message-passing library following the MPI standard was also developed.
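A minimal sketch of the point-to-point message-passing pattern defined by the MPI standard, shown here with mpi4py purely for illustration; the thesis developed its own MPI-style library for the STORM platform, so the snippet below is not that library, and the "boundary values" payload is a hypothetical example.

```python
# Illustration of MPI-standard point-to-point messaging using mpi4py.
# (The thesis implements its own MPI-like library for the STORM MPSoC platform;
#  this snippet only illustrates the send/recv pattern of the standard.)
# Run with, e.g.:  mpiexec -n 2 python mpi_sketch.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    # e.g. a block of reservoir-grid values to hand to a neighboring core
    boundary_values = [101.3, 98.7, 99.5]
    comm.send(boundary_values, dest=1, tag=0)
    print("rank 0 sent boundary values")
elif rank == 1:
    received = comm.recv(source=0, tag=0)
    print("rank 1 received", received)
```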