20 results for computational cost
at Universidade Federal do Rio Grande do Norte (UFRN)
Abstract:
This work addresses the importance of image compression for industry: processing and storing images is a standing challenge at Petrobras, which must optimize storage time while retaining a maximum number of images and data. We present an interactive system for processing and storing images in the wavelet domain, together with an interface for digital image processing. The proposal is based on the Peano function and the 1D wavelet transform. The storage system aims to optimize computational space, both for storage and for transmission of images. To this end, the Peano function is applied to linearize the images, and the 1D wavelet transform to decompose the resulting signal. These steps extract the information relevant for storing an image at a lower computational cost and with a very small margin of error between the original and processed images, i.e., little quality is lost when the proposed processing system is applied. The results obtained from the information extracted from the images are displayed in a graphical interface. Through this graphical user interface the user can view and analyze the results of the programs directly on the screen, without having to deal with source code. The graphical user interface and the programs for image processing via the Peano function and the 1D wavelet transform were developed in Java, allowing a direct exchange of information between them and the user.
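A minimal sketch of the decomposition step, assuming a Haar wavelet (the abstract does not name the wavelet family) and taking the Peano-linearized pixel sequence as given:

```cpp
#include <vector>
#include <cstddef>

// One level of the 1D Haar wavelet transform applied to the linearized image.
// `signal` is the pixel sequence produced by the Peano (space-filling) scan,
// assumed already computed and of even length. Averages (approximation) land
// in the first half of the output, differences (detail) in the second half;
// thresholding small details toward zero is what yields the storage saving
// with a small reconstruction error.
std::vector<double> haar1D(const std::vector<double>& signal) {
    const std::size_t half = signal.size() / 2;
    std::vector<double> out(signal.size());
    for (std::size_t i = 0; i < half; ++i) {
        out[i]        = (signal[2 * i] + signal[2 * i + 1]) / 2.0; // approximation
        out[half + i] = (signal[2 * i] - signal[2 * i + 1]) / 2.0; // detail
    }
    return out;
}
```

Applying the transform recursively to the approximation half gives a multi-level decomposition; the inverse simply recombines sums and differences, so the reconstruction error is bounded by whatever thresholding is applied to the detail coefficients.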
Abstract:
This thesis describes design methodologies for frequency selective surfaces (FSSs) composed of periodic arrays of pre-fractal metallic patches on single-layer dielectrics (FR4, RT/duroid). Shapes given by the Sierpinski island and T fractal geometries are exploited for the simple design of efficient band-stop spatial filters with applications in the microwave range. Initial results are discussed in terms of the electromagnetic effects of varying parameters such as the fractal iteration number (or fractal level), the fractal iteration factor, and the FSS periodicity, depending on the pre-fractal element used (Sierpinski island or T fractal). The transmission properties of the proposed periodic arrays are investigated through simulations performed with the commercial full-wave solvers Ansoft Designer™ and Ansoft HFSS™. To validate the methodology, FSS prototypes are selected for fabrication and measurement. The results point to interesting features of these FSS spatial filters: compactness, with high frequency compression factors, as well as frequency responses that remain stable under oblique incidence of plane waves. As its main focus, this thesis also presents an alternative electromagnetic (EM) optimization technique for the analysis and synthesis of FSSs with fractal motifs. In application examples of this technique, Vicsek and Sierpinski pre-fractal elements are used in the optimal design of FSS structures. Based on computational intelligence tools, the proposed technique overcomes the high computational cost associated with full-wave parametric analyses. To this end, fast and accurate multilayer perceptron (MLP) neural network models are developed using different parameters as design input variables. These neural network models compute the cost function within the iterations of population-based search algorithms. A continuous genetic algorithm (GA), particle swarm optimization (PSO), and the bees algorithm (BA) are used to optimize FSSs for a specified resonant frequency and bandwidth. The performance of these algorithms is compared in terms of computational cost and numerical convergence. The consistency of the results is verified by the excellent agreement between simulations and measurements of FSS prototypes built for a given fractal iteration.
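The surrogate-in-the-loop idea can be sketched as follows. Everything here is illustrative: a toy one-hidden-layer MLP with placeholder weights stands in for the trained neural model, and a plain continuous GA for the GA/PSO/BA variants compared in the thesis.

```cpp
#include <vector>
#include <random>
#include <algorithm>
#include <cmath>
#include <cstddef>

// Illustrative one-hidden-layer MLP standing in for the trained surrogate;
// in the thesis the weights come from training on full-wave simulation data.
struct Mlp {
    std::vector<std::vector<double>> w1;  // hidden-layer weights
    std::vector<double> w2;               // output-layer weights
    double predict(const std::vector<double>& x) const {
        double y = 0.0;
        for (std::size_t j = 0; j < w1.size(); ++j) {
            double a = 0.0;
            for (std::size_t i = 0; i < x.size(); ++i) a += w1[j][i] * x[i];
            y += w2[j] * std::tanh(a);
        }
        return y;  // e.g., predicted resonant frequency
    }
};

int main() {
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> uni(0.0, 1.0);
    std::normal_distribution<double> gauss(0.0, 0.05);

    const std::size_t dim = 3, pop = 30, gens = 200;
    const double target = 10.0;  // desired resonant frequency (illustrative)
    Mlp mlp{{{0.9, -0.4, 0.2}, {0.1, 0.8, -0.5}}, {6.0, 5.0}};  // placeholder weights

    // Cost: distance between surrogate prediction and the design target.
    // Each call is a cheap MLP evaluation, not a full-wave solve.
    auto cost = [&](const std::vector<double>& x) {
        return std::fabs(mlp.predict(x) - target);
    };

    std::vector<std::vector<double>> P(pop, std::vector<double>(dim));
    for (auto& x : P) for (auto& v : x) v = uni(rng);

    for (std::size_t g = 0; g < gens; ++g) {
        std::vector<std::vector<double>> Q;
        while (Q.size() < pop) {
            auto pick = [&]() {  // binary tournament selection
                const auto& a = P[rng() % pop];
                const auto& b = P[rng() % pop];
                return cost(a) < cost(b) ? a : b;
            };
            std::vector<double> pa = pick(), pb = pick(), child(dim);
            for (std::size_t i = 0; i < dim; ++i) {
                double w = uni(rng);  // arithmetic crossover + Gaussian mutation
                child[i] = w * pa[i] + (1.0 - w) * pb[i] + gauss(rng);
            }
            Q.push_back(child);
        }
        P = std::move(Q);
    }
    // Best surrogate-optimal design; in the thesis such a candidate would be
    // re-checked against a full-wave simulation and a fabricated prototype.
    auto best = *std::min_element(P.begin(), P.end(),
        [&](const std::vector<double>& a, const std::vector<double>& b) {
            return cost(a) < cost(b);
        });
    (void)best;
    return 0;
}
```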
Abstract:
This study seeks a more viable alternative for computing disparity maps in stereo vision, using a reduction factor that decreases the number of points considered in the captured image, and a neural network based on radial basis functions to interpolate the results. The goal is to produce an approximate disparity image using algorithms with low computational cost, unlike the classical algorithms.
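As an illustration of the interpolation step (the abstract does not give the network's exact form, so this is an assumed standard formulation), a Gaussian radial basis function network reconstructs the dense disparity map from the retained sample points as

```latex
% Illustrative Gaussian RBF interpolant for the sparse disparity samples
d(\mathbf{x}) \;\approx\; \sum_{i=1}^{N} w_i \,
    \exp\!\left( -\frac{\lVert \mathbf{x} - \mathbf{c}_i \rVert^{2}}{2\sigma^{2}} \right),
```

where the centers \(\mathbf{c}_i\) are the retained image points and the weights \(w_i\) are fitted so that the interpolant agrees with the disparities actually computed at those \(N\) points.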
Abstract:
Conventional methods for nonlinear blind source separation generally impose a series of restrictions to obtain a solution, often leading to an imperfect separation of the original sources and a high computational cost. In this work, we propose an alternative measure of independence based on information theory and use artificial intelligence tools to solve linear and, subsequently, nonlinear blind source separation problems. In the linear model, genetic algorithms and Rényi negentropy are applied as a measure of independence to find a separation matrix from linear mixtures of signals in the form of waveforms, audio, and images. A comparison is made with two types of Independent Component Analysis algorithms that are widespread in the literature. Subsequently, we use the same measure of independence as the cost function in a genetic algorithm to recover source signals that were mixed by nonlinear functions, using an artificial neural network of the radial basis type. Genetic algorithms are powerful tools for global search and are therefore well suited to blind source separation problems. Tests and analyses are carried out through computer simulations.
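For reference, the Rényi entropy of order \(\alpha\) and the corresponding negentropy are commonly written as follows; the quadratic case \(\alpha = 2\) is the one usually adopted in information-theoretic learning because it admits a simple kernel estimator (the thesis's exact estimator is not given in the abstract):

```latex
H_{\alpha}(X) = \frac{1}{1-\alpha} \log \int p_X^{\alpha}(x)\, dx,
\qquad
H_{2}(X) = -\log \int p_X^{2}(x)\, dx,
\qquad
J(X) = H(X_{\mathrm{G}}) - H(X),
```

where \(X_{\mathrm{G}}\) is a Gaussian variable with the same variance as \(X\); maximizing \(J\) drives the recovered signals away from Gaussianity and hence toward independence.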
Abstract:
In Simultaneous Localization and Mapping (SLAM), a robot placed at an unknown location in an arbitrary environment must be able to build a representation of that environment (a map) and localize itself within it simultaneously, using only information captured by its sensors and known control signals. Recently, driven by advances in computing power, work in this area has proposed using a video camera as the sensor, giving rise to Visual SLAM. There are several approaches, and the vast majority work by extracting features from the environment, computing the necessary correspondences, and from these estimating the required parameters. This work presents a monocular visual SLAM system that uses direct image registration to compute the image reprojection error, together with optimization methods that minimize this error and thus obtain the robot pose and the map of the environment directly from the pixels of the images. The feature extraction and matching steps are therefore not needed, enabling the system to work well in environments where traditional approaches have difficulty. Moreover, by addressing the SLAM problem as proposed in this work, we avoid a problem common to traditional approaches, known as error propagation. To deal with the high computational cost of this approach, several types of optimization methods were tested in order to find a good balance between estimate quality and processing time. The results presented in this work show the success of this system in different environments.
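The core of the direct approach can be stated compactly. Hedging on notation (the thesis's exact parameterization is not shown in the abstract), the photometric reprojection error over pose parameters \(\xi\) is

```latex
E(\xi) \;=\; \sum_{\mathbf{p}}
    \bigl( I_{\mathrm{cur}}\!\bigl( w(\mathbf{p}, d(\mathbf{p}); \xi) \bigr)
         - I_{\mathrm{ref}}(\mathbf{p}) \bigr)^{2},
```

where \(w\) warps pixel \(\mathbf{p}\), with depth estimate \(d(\mathbf{p})\), into the current image. \(E\) is minimized directly over the pixel intensities by iterative schemes such as Gauss-Newton or Levenberg-Marquardt, with no feature extraction or matching.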
Abstract:
Artificial neural networks are usually applied to solve complex problems. For problems of greater complexity, increasing the number of layers and neurons makes greater functional efficiency possible, but this leads to greater computational effort. Response time is an important factor in the decision to use neural networks in some systems. Many argue that the computational cost is higher during the training period; however, that phase is carried out only once. Once the network is trained, the existing computational resources must be used efficiently. In the multicore era, the problem boils down to the efficient use of all available processing cores, while accounting for the overhead of parallel computing. In this sense, this work proposes a modular structure that proved to be more suitable for parallel implementations. We propose to parallelize the feedforward process of an MLP-type ANN, implemented with OpenMP on a shared-memory computer architecture. The research consists of testing and analyzing execution times; speedup, efficiency, and parallel scalability are analyzed. In the proposed approach, reducing the number of connections between remote neurons decreases the response time of the network and, consequently, the total execution time. The time required for communication and synchronization is directly linked to the number of remote neurons in the network, so it is necessary to investigate the best distribution of remote connections.
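A minimal sketch of the parallelized feedforward step under the stated assumptions (shared memory, OpenMP; one layer shown, and the modular distribution of remote connections is not reproduced here):

```cpp
#include <vector>
#include <cmath>
#include <cstddef>
#include <omp.h>

// Feedforward of one MLP layer, parallelized with OpenMP. Each thread
// computes a disjoint block of output neurons; the weights W, biases and
// input vector are read-only inside the loop, so no synchronization is
// needed and the iterations are embarrassingly parallel.
std::vector<double> layerForward(const std::vector<std::vector<double>>& W,
                                 const std::vector<double>& bias,
                                 const std::vector<double>& in) {
    std::vector<double> out(W.size());
    #pragma omp parallel for schedule(static)
    for (long j = 0; j < static_cast<long>(W.size()); ++j) {
        double a = bias[j];
        for (std::size_t i = 0; i < in.size(); ++i) a += W[j][i] * in[i];
        out[j] = std::tanh(a);  // neuron activation
    }
    return out;
}
```

Chaining such calls, one per layer, gives the full feedforward pass; the per-layer barrier at the end of each parallel loop is precisely the kind of synchronization overhead the modular structure above tries to reduce.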
Abstract:
Modern wireless systems employ adaptive techniques to provide high throughput while observing desired coverage, Quality of Service (QoS), and capacity requirements. One alternative to further enhance the data rate is to apply cognitive radio concepts, in which a system is able to exploit unused spectrum on existing licensed bands by sensing the spectrum and opportunistically accessing its unused portions. Techniques like Automatic Modulation Classification (AMC) can help, or even be vital, in such scenarios. AMC implementations usually rely on some form of signal pre-processing, which may introduce a high computational cost or make assumptions about the received signal that may not hold (e.g., Gaussianity of the noise). This work proposes a new AMC method that uses a similarity measure from the Information Theoretic Learning (ITL) framework known as the correntropy coefficient. It extracts similarity measurements over a pair of random processes using higher-order statistics, yielding better similarity estimates than, e.g., the correlation coefficient. Experiments carried out by means of computer simulation show that the proposed technique achieves a high success rate in the classification of digital modulations, even in the presence of additive white Gaussian noise (AWGN).
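In its usual ITL form (with a Gaussian kernel \(\kappa_{\sigma}\)), the centered correntropy and the correntropy coefficient between processes \(X\) and \(Y\) are

```latex
u(X,Y) = \mathrm{E}\!\left[\kappa_{\sigma}(X-Y)\right]
       - \mathrm{E}_{X}\mathrm{E}_{Y}\!\left[\kappa_{\sigma}(X-Y)\right],
\qquad
\eta = \frac{u(X,Y)}{\sqrt{u(X,X)\, u(Y,Y)}},
\qquad
\kappa_{\sigma}(t) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-t^{2}/2\sigma^{2}}.
```

Because the kernel expansion carries all even-order moments of \(X - Y\), \(\eta\) captures higher-order statistics that the ordinary correlation coefficient misses.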
Abstract:
The increasing demand for high-performance wireless communication systems has shown the inefficiency of the current model of fixed radio-spectrum allocation. In this context, cognitive radio appears as a more efficient alternative by providing opportunistic spectrum access with the maximum possible bandwidth. To meet these requirements, the transmitter must identify transmission opportunities and the receiver must recognize the parameters defined for the communication signal. Techniques based on cyclostationary analysis can be applied to both spectrum sensing and modulation classification, even in low signal-to-noise ratio (SNR) environments. However, despite this robustness, one of the main disadvantages of cyclostationarity is the high computational cost of calculating its functions. This work proposes efficient architectures for obtaining cyclostationary features to be employed in both spectrum sensing and automatic modulation classification (AMC). In the context of spectrum sensing, a parallelized algorithm for extracting cyclostationary features of communication signals is presented, and the performance of this parallelized feature extractor is evaluated through speedup and parallel efficiency metrics. The spectrum sensing architecture is analyzed for several configurations of false alarm probability, SNR levels, and observation times for BPSK and QPSK modulations. In the context of AMC, the reduced alpha-profile is proposed as a cyclostationary signature calculated over a reduced set of cyclic frequencies. This signature is validated by a modulation classification architecture based on pattern matching. The AMC architecture is investigated in terms of the correct classification rates of AM, BPSK, QPSK, MSK, and FSK modulations, considering several scenarios of observation length and SNR levels. The numerical performance results obtained in this work show the efficiency of the proposed architectures.
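The cost referred to above comes from the cyclic autocorrelation function, which in continuous time is commonly written as

```latex
R_{x}^{\alpha}(\tau) \;=\; \lim_{T \to \infty} \frac{1}{T}
    \int_{-T/2}^{T/2} x\!\left(t + \tfrac{\tau}{2}\right)
    x^{*}\!\left(t - \tfrac{\tau}{2}\right) e^{-j 2 \pi \alpha t}\, dt,
```

which is nonzero only at the cyclic frequencies \(\alpha\) exhibited by the modulation. Evaluating it over a dense grid of \((\alpha, \tau)\) pairs is what makes the extraction expensive, motivating both the parallelized extractor and the reduced cyclic-frequency set of the alpha-profile.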
Abstract:
The main objective of this work is the application of Artificial Neural Networks (ANNs) to problems involving RF/microwave devices, for example the prediction of the frequency response of certain structures in a region of interest. Artificial neural networks are currently an alternative to standard methods for the analysis of microwave structures, since they are able to learn and, more importantly, to generalize the acquired knowledge from any type of available data, keeping the precision of the original technique while adding the low computational cost of neural models. For this reason, artificial neural networks are increasingly used for modeling microwave devices. Multilayer Perceptron and Radial Basis Function models are used in this work; the advantages and disadvantages of these models and their respective training algorithms are described. Planar microwave devices, such as Frequency Selective Surfaces and microstrip antennas, are in evidence due to the growing need for filtering and separation of electromagnetic waves and for the miniaturization of RF devices. It is therefore of fundamental importance to study the structural parameters of these devices in a fast and accurate way. The presented results show the capability of neural techniques to model both Frequency Selective Surfaces and antennas.
Abstract:
This work proposes a computational methodology to solve structural design optimization problems. The application develops, implements, and integrates methods for structural analysis, geometric modeling, design sensitivity analysis, and optimization. The optimum design problem is particularized to the plane stress case, with the objective of minimizing the structural mass subject to a stress criterion. Note that these constraints must be evaluated at a series of discrete points whose distribution should be dense enough to minimize the chance of any significant constraint violation between the specified points. Therefore, the local stress constraints are transformed into a global stress measure, reducing the computational cost of deriving the optimal shape design. The problem is approximated by the Finite Element Method using six-node Lagrangian triangular elements, with automatic mesh generation governed by a geometric element-quality criterion. In the geometric modeling, the contour is defined by parametric curves of B-spline type; these curves have characteristics suitable for the shape optimization method, which uses their key points as design variables in the solution of the minimization problem. A reliable tool for design sensitivity analysis is a prerequisite for performing interactive structural design, synthesis, and optimization. General expressions for design sensitivity analysis are derived with respect to the key points of the B-splines; the sensitivity analysis uses the adjoint approach and the analytical method. The formulation of the optimization problem applies the Augmented Lagrangian Method, which converts a constrained optimization problem into an unconstrained one. The solution of the augmented Lagrangian function is obtained with the help of the sensitivity analysis. The optimization problem thus reduces to the solution of a sequence of problems with bound constraints, which is solved by a memoryless quasi-Newton method. It is demonstrated through several examples that this new approach, integrating analytical design sensitivity analysis into shape design optimization with a global stress criterion, is computationally efficient.
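As a hedged illustration of the constraint aggregation (the abstract does not spell out the exact global measure), a common choice is a p-norm of the normalized von Mises stresses, combined with an augmented Lagrangian of the kind used here:

```latex
G(\boldsymbol{\sigma}) = \left( \sum_{i=1}^{n}
    \left( \frac{\sigma_i^{\mathrm{vM}}}{\bar{\sigma}} \right)^{p} \right)^{1/p} \le 1,
\qquad
L_{A}(\mathbf{x}, \lambda, r) = m(\mathbf{x}) + \lambda\, g(\mathbf{x})
    + \frac{r}{2}\, g(\mathbf{x})^{2},
```

where \(m\) is the structural mass, \(g = G - 1\) the global stress constraint, \(\lambda\) the Lagrange multiplier, and \(r\) the penalty parameter updated over the sequence of bound-constrained subproblems.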
Abstract:
This work presents an optimization technique based on structural topology optimization methods (TOM) designed to solve 3D thermoelasticity problems. The approach is based on the adjoint method of design sensitivity analysis and is intended for loosely coupled thermomechanical problems. The technique makes use of analytical expressions for the sensitivities, enabling a reduction of the computational cost through a coupled-field adjoint equation defined in terms of the temperature and displacement fields. The TOM used is based on the material approach. Thus, so that the domain is composed of a continuous distribution of material, enabling the use of classical nonlinear programming models in the optimization problem, the microstructure is treated as a porous medium whose constitutive equation is a function only of the homogenized relative density of the material. In this approach, the effective properties of materials with intermediate densities are penalized by an artificial microstructure model based on SIMP (Solid Isotropic Material with Penalty). To circumvent checkerboard problems and to reduce the dependence of the final optimal layout on the initial mesh, both caused by numerical instability, restrictions on the components of the gradient of the relative densities were applied. The optimization problem is solved by the augmented Lagrangian method, the solution being obtained with the Galerkin finite element method using the Tetra4 element for the approximation. This element interpolates the relative density as well as the displacement components and the temperature. As for the problem definition, the heat load is assumed to be in steady state, i.e., the effects of heat conduction and convection do not vary with time; the mechanical load is assumed to be static and distributed.
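The SIMP model named above penalizes intermediate densities through a power law; extending the same law to the thermal conductivity is a common choice in thermoelastic problems and is assumed here, since the abstract does not state it:

```latex
E(\rho) = \rho^{p} E^{0}, \qquad k(\rho) = \rho^{p} k^{0},
\qquad p > 1, \quad 0 < \rho_{\min} \le \rho \le 1,
```

where \(E^{0}\) and \(k^{0}\) are the Young's modulus and conductivity of the solid material. For \(p > 1\), intermediate densities yield disproportionately little stiffness, pushing the layout toward solid/void designs.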
Abstract:
The topology optimization problem is to characterize and determine the optimum distribution of material within a domain. In other words, after defining the boundary conditions on a pre-established domain, the problem is how to distribute the material so as to solve the minimization problem. The objective of this work is to propose a competitive formulation for determining optimum structural topologies in 3D problems, one able to provide high-resolution layouts. The procedure combines the Galerkin finite element method with the optimization method, seeking the best material distribution over the fixed design domain. The layout topology optimization method is based on the material approach proposed by Bendsoe & Kikuchi (1988) and considers a homogenized constitutive equation that depends only on the relative density of the material. The finite element used in the approximation is a four-node tetrahedron with a selective integration scheme, which interpolates not only the components of the displacement field but also the relative density field. The proposed procedure consists of solving a sequence of layout optimization problems applied to compliance minimization problems and to mass minimization problems under local stress constraints. The microstructure used in this procedure is SIMP (Solid Isotropic Material with Penalty). The approach reduces the computational cost considerably and proves to be efficient and robust. The results provide a well-defined structural layout, with a sharp distribution of the material and a clear boundary definition. The layout quality is proportional to the mean element size, and a considerable reduction in the number of design variables is observed due to the tetrahedral element.
Abstract:
In this work we present a study of the structural, electronic, and optical properties, at ambient conditions, of CaSiO3, CaGeO3, and CaSnO3 crystals, all members of the Ca-perovskite class. For each of them, we performed ab initio density functional theory calculations, within the LDA and GGA approximations, of the structural parameters, geometry optimization, unit cell volume, density, angles and interatomic distances, band structure, carrier effective masses, total and partial density of states, dielectric function, refractive index, optical absorption, reflectivity, optical conductivity, and loss function. A comparison between the LDA and GGA results was carried out, with the exception of CaSiO3, for which only the LDA calculation was performed, due to the high computational cost imposed by its low-symmetry crystalline structure. The Ca-perovskite literature shows an absence of electronic structure calculations for these materials, justifying the present work.
Abstract:
Traditional applications of feature selection in areas such as data mining, machine learning, and pattern recognition aim to improve the accuracy and reduce the computational cost of a model. This is done by removing redundant, irrelevant, or noisy data, finding a representative subset of the data that reduces its dimensionality without loss of performance. With the development of research on ensembles of classifiers, and the verification that this type of model performs better than individual models when the base classifiers are diverse, a new field of application for feature selection research has emerged. In this new field, the goal is to find diverse subsets of features for the construction of the base classifiers of ensemble systems. This work proposes an approach that maximizes the diversity of the ensembles by selecting subsets of features using a model that is independent of the learning algorithm and has low computational cost. This is done using bio-inspired metaheuristics with filter-based evaluation criteria.
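One classical filter criterion of the kind alluded to above is Hall's correlation-based feature selection (CFS) merit. This is an illustrative choice only; the abstract does not name the specific criteria used:

```latex
\mathrm{Merit}(S) = \frac{k\, \bar{r}_{cf}}{\sqrt{k + k(k-1)\, \bar{r}_{ff}}},
```

where \(k = |S|\) is the size of the feature subset, \(\bar{r}_{cf}\) the mean feature-class correlation, and \(\bar{r}_{ff}\) the mean feature-feature correlation. The score rewards relevance and penalizes redundancy, and it can be evaluated without training any classifier, which is what keeps the metaheuristic search cheap and independent of the learning algorithm.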
Abstract:
The reverse time migration (RTM) algorithm has been widely used in the seismic industry to generate images of the subsurface and thus reduce the risk of oil and gas exploration. Its widespread use is due to the high quality of the subsurface images it produces, but RTM is also known for its high computational cost; therefore, parallel computing techniques have been used in its implementations. In general, parallel approaches to RTM use coarse granularity, distributing the processing of subsets of seismic shots among the nodes of a distributed system. Coarse-grained parallel approaches to RTM have proved very efficient, since each seismic shot can be processed independently. For this reason, RTM performance can be considerably improved by also using a finer-grained parallel approach for the processing assigned to each node. This work presents an efficient parallel algorithm for 3D reverse time migration with fine granularity using OpenMP. The 3D acoustic wave propagation algorithm makes up much of the RTM, so different load-balancing schemes were analyzed in order to minimize possible losses of parallel performance at this stage. The results served as a basis for the implementation of the other RTM phases: backpropagation and the imaging condition. The proposed algorithm was tested with synthetic data representing some of the possible subsurface structures. Metrics such as speedup and efficiency were used to analyze its parallel performance. The migrated sections show that the algorithm performed satisfactorily in identifying subsurface structures. As for the parallel performance, the analysis clearly demonstrates the scalability of the algorithm, achieving a speedup of 22.46 for the wave propagation and 16.95 for the full RTM, both with 24 threads.
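A sketch of the fine-grained stage described above: one time step of 3D acoustic wave propagation with a second-order stencil, parallelized with OpenMP. The `collapse(2)` and `schedule(static)` clauses are one possible configuration, standing in for the load-balancing variants compared in the work.

```cpp
#include <vector>
#include <omp.h>

// One explicit time step of the 3D acoustic wave equation on a regular grid.
// `cur` and `prev` are the wavefields at the current and previous steps,
// `vel2` the squared velocity model, and dt2_over_h2 = (dt*dt)/(h*h).
// The two outer loops are collapsed so all threads share the i-j plane work.
void waveStep(std::vector<float>& next, const std::vector<float>& cur,
              const std::vector<float>& prev, const std::vector<float>& vel2,
              int nx, int ny, int nz, float dt2_over_h2) {
    auto idx = [=](int i, int j, int k) { return (i * ny + j) * nz + k; };
    #pragma omp parallel for collapse(2) schedule(static)
    for (int i = 1; i < nx - 1; ++i)
        for (int j = 1; j < ny - 1; ++j)
            for (int k = 1; k < nz - 1; ++k) {
                // 7-point Laplacian of the current wavefield
                float lap = cur[idx(i+1,j,k)] + cur[idx(i-1,j,k)]
                          + cur[idx(i,j+1,k)] + cur[idx(i,j-1,k)]
                          + cur[idx(i,j,k+1)] + cur[idx(i,j,k-1)]
                          - 6.0f * cur[idx(i,j,k)];
                // leapfrog update in time
                next[idx(i,j,k)] = 2.0f * cur[idx(i,j,k)] - prev[idx(i,j,k)]
                                 + vel2[idx(i,j,k)] * dt2_over_h2 * lap;
            }
}
```

Forward propagation, backpropagation of the receiver data, and the imaging condition all revisit this kernel, which is why its load balancing dominates the parallel performance of the whole RTM.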