172 results for "Algoritmo memético"


Relevance:

10.00%

Publisher:

Abstract:

Digital Elevation Models (DEM) are numerical representations of a portion of the Earth's surface. Among the several factors that affect the quality of a DEM, particular attention should be paid to the input data and to the choice of the interpolation algorithm. At the same time, several numerical models are used nowadays to characterize nearshore hydrodynamics and morphological change in coastal areas, and their validation is based on field data collection. Regardless of the complexity of the physical processes being modeled, little attention has been given to the bathymetric interpolation built into the numerical models of each specific application. Therefore, this study aims to investigate and quantify the influence of the bathymetry, as obtained from a DEM, on a hydrodynamic circulation model of a coastal stretch off the coast of the State of Rio Grande do Norte, Northeast Brazil. This coastal region is characterized by strong hydrodynamic and littoral processes, resulting in a very dynamic morphology with shallow coastal bathymetry. Important economic activities, such as oil exploitation and production, fisheries, salt ponds, shrimp farms and tourism, impact the local ecosystems and themselves influence the local hydrodynamics. This makes the region one of the most important for the development of the State, but also raises the possibility of serious environmental accidents. SisBaHiA® (Sistema Básico de Hidrodinâmica Ambiental, Environmental Hydrodynamics System) was chosen as the hydrodynamic model, since it has been successfully employed at several locations along the Brazilian coast. This model was developed by the Coastal and Oceanographical Engineering Group of the Ocean Engineering Program at the Federal University of Rio de Janeiro. Several interpolation methods were tested for the construction of the DEM, namely Natural Neighbor, Kriging, Triangulation with Linear Interpolation, Inverse Distance to a Power, Nearest Neighbor, and Minimum Curvature, all implemented in the software Surfer®. The reference bathymetry for the DEM was obtained from nautical charts provided by the Brazilian Hydrographic Service of the Brazilian Navy and from a field survey conducted in 2005. Changes in flow velocity and free surface elevation were evaluated under three aspects: a spatial view along three cross-shore profiles and one longshore profile; a temporal view at three central grid nodes over 30 days; and a hodograph analysis of the U and V velocity components over different tidal cycles. Small, negligible variations in sea surface elevation were identified. However, the differences in flow magnitude and direction of the velocities were significant, depending on the DEM.
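
A minimal sketch (not the Surfer® implementation) of two of the interpolators cited above, applied to scattered bathymetric soundings. The coordinates, depths and the power p=2 are illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)
xy = rng.uniform(0, 1000, size=(200, 2))            # sounding positions (m)
z = -5 - 0.01 * xy[:, 0] + rng.normal(0, 0.2, 200)  # synthetic depths (m)

gx, gy = np.meshgrid(np.linspace(0, 1000, 50), np.linspace(0, 1000, 50))

# Triangulation with Linear Interpolation (Delaunay-based)
z_lin = griddata(xy, z, (gx, gy), method="linear")

def idw(points, values, grid, p=2.0):
    """Inverse Distance to a Power over a flattened grid."""
    g = np.column_stack([grid[0].ravel(), grid[1].ravel()])
    d = np.linalg.norm(g[:, None, :] - points[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-9) ** p
    return (w @ values / w.sum(axis=1)).reshape(grid[0].shape)

z_idw = idw(xy, z, (gx, gy))
print(np.nanmax(np.abs(z_idw - z_lin)))  # where the two DEMs disagree most
```

Differences of this kind between DEMs are exactly what propagates into the circulation model's velocity fields.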

Relevance:

10.00%

Publisher:

Abstract:

Given the need to deploy systems for managing and monitoring natural resources in areas susceptible to environmental degradation, as is the case of semiarid regions, several studies have been developed in search of models that are effective as well as technically and economically viable. Accordingly, this study aimed to estimate the daily actual evapotranspiration (ETr) through the Surface Energy Balance Algorithm for Land (SEBAL), applied to remote sensing products in a semiarid region, the Seridó of Rio Grande do Norte, and to validate these estimates against ETr values obtained by the Penman-Monteith method (the standard method of the Food and Agriculture Organization, FAO). SEBAL is based on the energy balance method, which allows the vertical latent heat flux (LE) to be obtained from orbital images as the residual of the other vertical fluxes, namely the soil heat flux (G), the sensible heat flux (H) and the net radiation (Rn), and hence the evapotranspiration. The study area comprises the surroundings of the Dourado reservoir, located in the municipality of Currais Novos/RN. Five TM/Landsat-5 images were used to run the algorithm. The work was divided into three chapters in order to better discuss each part of the SEBAL processing, as follows: the first chapter addresses the spatio-temporal variability of the biophysical variables; the second deals with the spatio-temporal distribution of the instantaneous and daily radiation balance; and the third discusses the heart of the work, the estimation of the daily actual evapotranspiration and its validation for the study area.
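
A minimal sketch of the SEBAL residual step described above: LE is obtained as the residual of the surface energy balance, LE = Rn - G - H, and scaled to daily ET via the evaporative fraction. All pixel values below are synthetic placeholders, not Landsat-5/TM retrievals.

```python
import numpy as np

Rn = np.array([[520.0, 480.0], [610.0, 555.0]])  # net radiation (W m-2)
G  = 0.15 * Rn                                   # soil heat flux, illustrative fraction
H  = np.array([[180.0, 220.0], [140.0, 200.0]])  # sensible heat flux (W m-2)

LE = Rn - G - H                                  # latent heat flux as the residual (W m-2)

# Evaporative fraction, assumed constant over the day (a common SEBAL step)
ef = LE / (Rn - G)
Rn24 = 150.0                                     # assumed daily mean available energy (W m-2)
lam = 2.45e6                                     # latent heat of vaporization (J kg-1)
ET24 = ef * Rn24 * 86400 / lam                   # daily actual ET (mm day-1)
print(ET24)
```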

Relevance:

10.00%

Publisher:

Abstract:

This dissertation presents a methodology for the optimization of a building cold-water distribution system. It is a case study applied to the Tropical Búzios Residential Condominium, located at Búzios Beach, in the municipality of Nísia Floresta, on the east coast of the state of Rio Grande do Norte, twenty kilometers from Natal. Designing cold-water distribution networks according to standard NBR 5626 of the ABNT (Brazilian Association of Technical Standards) does not guarantee that the resulting design is the least-cost solution. An optimization methodology is needed that yields, among all feasible solutions, the one of minimum cost. In the optimization of the cold-water distribution system of the Tropical Búzios Condominium, the Granados method is used: an iterative optimization algorithm, based on dynamic programming, that yields the minimum-cost network as a function of the piezometric head of the reservoir. For its application to branched networks, a computer program in the C language is used. The process is divided into two stages: obtaining an initial solution and reducing the piezometric head at the network head. In the first stage, the smallest diameters that satisfy the maximum-velocity limit and the minimum-pressure requirements are used, and the piezometric head is raised to guarantee these requirements. In the second stage of the Granados method, an iterative process gradually reduces the head by replacing pipe stretches with the next larger diameter at the smallest possible increase in network cost. The diameter change is made in the optimal stretch, the one with the smallest Exchange Gradient. The process stops when the desired head is reached. The material costs of the optimized network are calculated and analyzed by comparison with the costs of the conventional design.
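
A minimal sketch of the second stage described above: repeatedly upsize the stretch with the smallest Exchange Gradient (extra cost per metre of head recovered) until the head target is met. The pipe catalogue, head losses and the 4 m target are illustrative assumptions, not the condominium's data.

```python
# diameters: (name, cost per m, head loss per m at design flow) - illustrative
DIAMS = [("50mm", 10.0, 0.080), ("75mm", 18.0, 0.012), ("100mm", 30.0, 0.003)]

stretches = [[120.0, 0], [80.0, 0], [60.0, 1]]  # [length m, current diameter index]

target_reduction = 4.0   # metres of piezometric head to shave off (assumed)
recovered, extra_cost = 0.0, 0.0
while recovered < target_reduction:
    best = None
    for s in stretches:
        L, i = s
        if i + 1 >= len(DIAMS):
            continue                              # already at the largest diameter
        dc = (DIAMS[i + 1][1] - DIAMS[i][1]) * L  # extra cost of upsizing this stretch
        dh = (DIAMS[i][2] - DIAMS[i + 1][2]) * L  # head recovered by upsizing it
        grad = dc / dh                            # Exchange Gradient
        if best is None or grad < best[0]:
            best = (grad, s, dc, dh)
    if best is None:
        raise RuntimeError("no further upsizing possible")
    _, s, dc, dh = best
    s[1] += 1                                     # commit the cheapest exchange
    recovered += dh
    extra_cost += dc
print(recovered, extra_cost)
```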

Relevance:

10.00%

Publisher:

Abstract:

In this work we study a connection between a non-Gaussian statistics, the Kaniadakis statistics, and complex networks. We show that the degree distribution P(k) of a scale-free network can be calculated by maximizing the information entropy in the context of non-Gaussian statistics. As an example, a numerical analysis based on the preferential attachment growth model is discussed, and the numerical behavior of the Kaniadakis and Tsallis degree distributions is compared. We also analyze the diffusive epidemic process (DEP) on a one-dimensional regular lattice. The model is composed of A (healthy) and B (sick) species that diffuse independently on the lattice with diffusion rates DA and DB, subject to the probabilistic dynamical rules A + B → 2B and B → A. This model belongs to the category of non-equilibrium systems with an absorbing state and a phase transition between active and inactive states. We investigate the critical behavior of the DEP using an auto-adaptive algorithm to find critical points: the method of automatic searching for critical points (MASCP). We compare our results with the literature and find that the MASCP successfully finds the critical exponents 1/ν and 1/zν in all cases, DA = DB and DA ≠ DB. The simulations show that the DEP has the same critical exponents as expected from field-theoretical arguments. Moreover, we find that, contrary to a renormalization group prediction, the system does not show a discontinuous phase transition in the regime DA > DB.
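
A minimal sketch comparing the Kaniadakis and Tsallis degree distributions mentioned above. The forms exp_κ(x) = (sqrt(1+κ²x²) + κx)^(1/κ) and exp_q(x) = [1 + (1−q)x]^(1/(1−q)) are the standard deformed exponentials; the values of κ, q and the scale λ are illustrative choices, not the thesis's fitted parameters.

```python
import numpy as np

def kaniadakis_exp(x, kappa):
    return (np.sqrt(1 + kappa**2 * x**2) + kappa * x) ** (1 / kappa)

def tsallis_exp(x, q):
    return np.maximum(1 + (1 - q) * x, 0) ** (1 / (1 - q))

k = np.arange(1, 1000)
lam = 10.0
Pk_kappa = kaniadakis_exp(-k / lam, kappa=0.7)
Pk_q = tsallis_exp(-k / lam, q=1.3)
Pk_kappa /= Pk_kappa.sum()          # normalize to probability distributions
Pk_q /= Pk_q.sum()
# both decay as power laws for large k, the hallmark of scale-free networks
print(Pk_kappa[:5], Pk_q[:5])
```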

Relevance:

10.00%

Publisher:

Abstract:

In this thesis we deal with a class of composite networks formed by two tree networks, TP and TA, whose endpoints touch each other through a bipartite network BPA. We explore this network using a functional approach. We are interested in to what extent the topology, or structure, of TX (X = A or P) determines the links of BPA. This composite structure is a useful model in evolutionary biology, where TP and TA are the phylogenetic trees of the plants and animals that interact in an ecological community. We use in this thesis two types of mutualistic interactions: frugivory and pollination networks. We analyze how the phylogeny of TX determines, or is correlated with, BPA using a Monte Carlo approach. We use the phylogenetic distance among the elements that interact with a given species to construct an index κ that quantifies the influence of TX over BPA. The algorithm is based on the assumption that interaction matrices that follow the phylogeny of TX have a total phylogenetic distance smaller than the average distance of an ensemble of Monte Carlo realizations generated by adequately shuffling the data. We find that the phylogeny of the animal species has a more marked effect on the ecological matrix than the plant phylogeny.
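
A minimal sketch of the Monte Carlo test described above: the total phylogenetic distance among the partners of each species is compared with an ensemble of shuffled interaction matrices. The distance matrix, the interaction matrix and the normalization of the index (observed over ensemble mean) are stand-in assumptions, not the thesis's exact construction.

```python
import numpy as np

rng = np.random.default_rng(1)
n_plants, n_animals = 8, 10
D = rng.uniform(1, 10, (n_animals, n_animals))
D = (D + D.T) / 2
np.fill_diagonal(D, 0)                   # pairwise phylogenetic distances on T_A
B = (rng.random((n_plants, n_animals)) < 0.3).astype(int)  # bipartite matrix B_PA

def total_distance(B, D):
    """Sum of pairwise phylogenetic distances among the partners of each plant."""
    t = 0.0
    for row in B:
        idx = np.flatnonzero(row)
        t += D[np.ix_(idx, idx)].sum() / 2
    return t

obs = total_distance(B, D)
# null ensemble: shuffle each plant's partners, preserving its number of links
ens = [total_distance(rng.permuted(B, axis=1), D) for _ in range(1000)]
kappa_index = obs / np.mean(ens)         # kappa < 1: interactions track the phylogeny
print(kappa_index)
```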

Relevance:

10.00%

Publisher:

Abstract:

The physical processes that control stellar evolution are strongly influenced by several stellar parameters, such as rotational velocity, the mass depth of the convective envelope, and magnetic field intensity. In this study we analyze the interconnection of some stellar parameters, such as lithium abundance A(Li), chromospheric activity and magnetic field intensity, as well as the variation of these parameters as a function of age, rotational velocity, and the mass depth of the convective envelope, for a selected sample of solar analog and solar twin stars. In particular, we analyze the convective envelope mass depth and the dispersion of the lithium abundance of these stars. We also study the evolution of rotation in subgiant stars, since they represent the evolutionary stage that follows the solar analogs and twins. For this analysis, we computed evolutionary models with the TGEC code to derive the evolutionary stage and the convective envelope mass depth, and to derive more precisely the stellar mass and age, for these 118 stars. Our investigation shows a considerable dispersion of the lithium abundance of the solar analog stars. We also find that this dispersion is not driven by the depth of the convective zone, so the scatter of A(Li) cannot be explained by classical theories of mixing in the convective zone. We conclude that extra mixing processes are necessary to explain this depletion of the lithium abundance in solar analog and twin stars. For the subgiants, we computed the rotation periods of 30 subgiant stars observed by the CoRoT satellite, applying two different methods: the Lomb-Scargle algorithm and the Plavchan periodogram. With the TGEC code we computed models with internal distributions of angular momentum, in order to confront the model predictions with the observational results. This analysis showed that solid-body rotation models are incompatible with the physical interpretation of the observational results. We further conclude that the magnetic field, the convective envelope mass depth, and the internal redistribution of angular momentum are essential to explain the evolution of low-mass stars and their observational characteristics. Based on a population synthesis simulation, we conclude that the solar neighborhood contains a considerable number of solar twins compared with the set discovered so far; altogether we predict around 400 solar analogs in the solar neighborhood (within a distance of 100 pc). We also study the angular momentum of solar analogs and twins, concluding that the angular momentum added by a Jupiter-type planet placed at Jupiter's position is not enough to account for the angular momentum predicted by Kraft's law (Kraft 1970).
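
A minimal sketch of the Lomb-Scargle step mentioned above: recovering a rotation period from an unevenly sampled light curve. The synthetic 12-day signal stands in for a CoRoT time series; it is not thesis data.

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0, 60, 500))              # days, uneven sampling
P_true = 12.0
flux = 1 + 0.01 * np.sin(2 * np.pi * t / P_true) + rng.normal(0, 0.002, t.size)

periods = np.linspace(2, 40, 4000)
w = 2 * np.pi / periods                           # angular frequencies to scan
power = lombscargle(t, flux - flux.mean(), w, normalize=True)
print(periods[np.argmax(power)])                  # recovers ~12 days
```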

Relevance:

10.00%

Publisher:

Abstract:

In this work we study and compare two percolation algorithms, one developed by Elias and the other by Newman and Ziff, using theoretical tools of algorithm complexity analysis together with an experimental comparison. This work is divided into three chapters. The first covers some definitions and theorems necessary for a more formal mathematical study of percolation. The second presents the techniques used to estimate algorithm complexity, namely worst case, best case and average case; we use the worst-case technique to estimate the complexity of both algorithms and thus compare them. The last chapter presents several characteristics of each algorithm and, through the theoretical complexity estimates and a comparison of the execution times of the most important part of each one, compares these two important algorithms for simulating percolation.
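
A minimal sketch of the Newman-Ziff idea discussed above: sites are occupied one at a time in random order and clusters are merged with a union-find structure, so one sweep over all occupation numbers costs nearly O(N), instead of one independent simulation per occupation probability p. The lattice size and the p=0.5 readout are illustrative.

```python
import numpy as np

L = 64                                  # linear size of the square lattice
N = L * L
parent = np.full(N, -1)                 # -1 marks an empty site
size = np.ones(N, dtype=int)

def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]   # path halving
        i = parent[i]
    return i

rng = np.random.default_rng(3)
largest = 0
for step, s in enumerate(rng.permutation(N), 1):
    parent[s] = s                       # occupy site s
    for nb in (s - 1, s + 1, s - L, s + L):
        if 0 <= nb < N and parent[nb] != -1:
            if abs(nb - s) == 1 and nb // L != s // L:
                continue                # do not wrap across row edges
            a, b = find(s), find(nb)
            if a != b:                  # union: merge the two clusters
                parent[b] = a
                size[a] += size[b]
    largest = max(largest, size[find(s)])
    if step == N // 2:                  # occupation fraction p = 0.5
        print(largest / N)              # largest-cluster fraction at p = 0.5
```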

Relevance:

10.00%

Publisher:

Abstract:

In this work we present an exposition of the mathematical theory of compactly supported orthogonal wavelets in the context of multiresolution analysis. These wavelets are particularly attractive because they lead to a stable and very efficient algorithm, the Fast Wavelet Transform (FWT). One of our objectives is to develop efficient algorithms for computing the wavelet coefficients of the FWT through Mallat's pyramid algorithm, and to discuss its connection with filter banks. We also study the concept of multiresolution analysis, the setting in which wavelets can be understood and constructed naturally, taking an important step in the passage from the mathematical universe (continuous domain) to the universe of representation (discrete domain).
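
A minimal sketch of one level of Mallat's pyramid algorithm described above: convolve with the low-pass and high-pass filters of the Daubechies-4 filter bank and downsample by two, with periodic extension. The input signal is an arbitrary example.

```python
import numpy as np

s3 = np.sqrt(3.0)
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2))  # low-pass (D4)
g = h[::-1] * np.array([1, -1, 1, -1])                             # quadrature mirror high-pass

def dwt_level(x, h, g):
    """One analysis level: approximation and detail coefficients."""
    n = len(x)
    a = np.zeros(n // 2)
    d = np.zeros(n // 2)
    for i in range(n // 2):
        for k in range(len(h)):
            a[i] += h[k] * x[(2 * i + k) % n]   # filter + downsample by 2
            d[i] += g[k] * x[(2 * i + k) % n]
    return a, d

x = np.sin(np.linspace(0, 4 * np.pi, 64))
a, d = dwt_level(x, h, g)
# the pyramid recurses on `a`; orthogonality preserves the energy:
print(np.allclose((a**2).sum() + (d**2).sum(), (x**2).sum()))
```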

Relevance:

10.00%

Publisher:

Abstract:

In this work we study the survival cure rate model proposed by Yakovlev (1993), considered in a competing-risks setting. Covariates are introduced for modeling the cure rate, and we allow some covariates to have missing values. We consider only the case in which the missing covariates are categorical, and we implement the EM algorithm via the method of weights for maximum likelihood estimation. We present a Monte Carlo simulation experiment to compare the properties of the estimators based on this method with those of the estimators under the complete-case scenario. In this experiment we also evaluate the impact on the parameter estimates of increasing the proportion of immune and censored individuals among the non-immune ones. We demonstrate the proposed methodology with a real data set involving the time to graduation in the undergraduate Statistics program of the Universidade Federal do Rio Grande do Norte.
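
A minimal sketch of the EM "method of weights" for a missing categorical covariate: each incomplete record is expanded into one pseudo-record per category, weighted by its posterior probability given the current parameters. For brevity this uses a plain exponential survival model with a missing binary covariate, without the cure fraction or censoring of the thesis.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 500
x = rng.integers(0, 2, n)                      # binary covariate
t = rng.exponential(1 / np.where(x == 1, 2.0, 0.5))  # survival times
obs = rng.random(n) > 0.3                      # 30% of x missing (assumed MAR)

pi, lam = 0.5, np.array([1.0, 1.0])            # P(x=1) and the two rates, initial
for _ in range(200):
    # E-step: posterior weight that x_i = 1 for the incomplete records
    f1 = pi * lam[1] * np.exp(-lam[1] * t)
    f0 = (1 - pi) * lam[0] * np.exp(-lam[0] * t)
    w1 = np.where(obs, x, f1 / (f0 + f1))      # method-of-weights weights
    # M-step: weighted maximum likelihood
    pi = w1.mean()
    lam = np.array([(1 - w1).sum() / ((1 - w1) * t).sum(),
                    w1.sum() / (w1 * t).sum()])
print(pi, lam)                                 # recovers ~0.5 and rates (0.5, 2.0)
```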

Relevance:

10.00%

Publisher:

Abstract:

In this work we develop a spline-based method for solving initial value problems involving ordinary differential equations, with emphasis on linear equations. The method can be seen as an alternative to traditional solvers such as Runge-Kutta, and avoids root calculations in the linear time-invariant case. The method is then applied to a central problem of control theory, namely the step response problem for linear ODEs with possibly varying coefficients, where root calculations do not apply. We implemented an efficient algorithm that uses exclusively matrix-vector operations. The working interval (up to the settling time) was determined through a calculation of the least stable mode using a modified power method. Several variants of the method were compared by simulation. For general linear problems on a fine grid, the proposed method compares favorably with the Euler method. In the time-invariant case, where the alternative is root calculation, we have indications that the proposed method is competitive for equations of sufficiently high order.
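
A minimal sketch of estimating the least stable mode with a power method, as in the settling-time computation above. The shift trick below assumes real, negative eigenvalues; the thesis's exact modification of the power method is not specified here, so this is only one plausible variant.

```python
import numpy as np

A = np.array([[ 0.0,   1.0,  0.0],
              [ 0.0,   0.0,  1.0],
              [-6.0, -11.0, -6.0]])   # companion matrix, eigenvalues -1, -2, -3

sigma = 10.0                          # shift so the slowest mode becomes dominant
B = A + sigma * np.eye(3)
v = np.ones(3)
for _ in range(500):                  # plain power iteration on the shifted matrix
    v = B @ v
    v /= np.linalg.norm(v)
mu = v @ B @ v                        # Rayleigh quotient estimate
least_stable = mu - sigma             # ~ -1.0, the least stable mode
print(least_stable, "settling time ~", 4 / abs(least_stable))  # 2% criterion
```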

Relevance:

10.00%

Publisher:

Abstract:

In this work we study a random strategy called MOSES, introduced in 1996 by François. Asymptotic results for this strategy, namely the behavior of the stationary distributions of the chain associated with the strategy, were derived by François in 1998 from the theory of Freidlin and Wentzell [8]. These results are detailed in this work. Moreover, we note that an alternative approach to the convergence of this strategy is possible without making use of the Freidlin-Wentzell theory, yielding the almost sure visit of the strategy to uniform populations that contain the minimum. Some simulations in Matlab are presented in this work.

Relevance:

10.00%

Publisher:

Abstract:

In this work, the quantitative analysis of glucose, triglycerides and cholesterol (total and HDL) in both rat and human blood plasma was performed without any pretreatment of the samples, using near-infrared (NIR) spectroscopy combined with multivariate methods. For this purpose, different techniques and algorithms for pre-processing data, selecting variables and building multivariate regression models were compared, such as partial least squares regression (PLS), nonlinear regression by artificial neural networks (ANN), interval partial least squares regression (iPLS), the genetic algorithm (GA) and the successive projections algorithm (SPA), among others. For the determinations in rat blood plasma, the variable selection algorithms showed satisfactory results both for the correlation coefficients (R²) and for the root mean square error of prediction (RMSEP) for the three analytes, especially for triglycerides and HDL cholesterol. The RMSEP values for glucose, triglycerides and HDL cholesterol obtained with the best PLS model were 6.08, 16.07 and 2.03 mg dL-1, respectively. In contrast, for the determinations in human blood plasma, the predictions obtained by the PLS models gave unsatisfactory results, with a nonlinear trend and the presence of bias. ANN regression was then applied as an alternative to PLS, given its ability to model data from nonlinear systems. The root mean square errors of monitoring (RMSEM) for glucose, triglycerides and total cholesterol, for the best ANN models, were 13.20, 10.31 and 12.35 mg dL-1, respectively. Statistical tests (F and t) suggest that NIR spectroscopy combined with multivariate regression methods (PLS and ANN) is capable of quantifying these analytes (glucose, triglycerides and cholesterol) even when they are present in highly complex biological fluids such as blood plasma.
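
A minimal sketch of the PLS step described above: fit a PLS regression on NIR spectra and report RMSEP, the figure of merit quoted in the abstract. The spectra are synthetic and the choice of 10 latent variables is an arbitrary assumption.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
X = rng.normal(size=(120, 700))              # 120 plasma spectra, 700 wavelengths
beta = np.zeros(700)
beta[200:210] = 1.0                          # a glucose-like absorption band
y = X @ beta + rng.normal(0, 0.5, 120)       # reference values (e.g. mg/dL)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
pls = PLSRegression(n_components=10).fit(Xtr, ytr)
pred = pls.predict(Xte).ravel()
rmsep = np.sqrt(np.mean((pred - yte) ** 2))  # root mean square error of prediction
print(rmsep)
```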

Relevance:

10.00%

Publisher:

Abstract:

The aim of this study was to evaluate the potential of near-infrared reflectance spectroscopy (NIRS) as a rapid and non-destructive method to determine the soluble solids content (SSC), pH and titratable acidity of intact plums. Samples of plums with soluble solids ranging from 5.7 to 15%, pH from 2.72 to 3.84 and titratable acidity from 0.88 to 3.6% were collected from supermarkets in Natal, Brazil, and NIR spectra were acquired in the 714-2500 nm range. Several multivariate calibration techniques were compared with respect to different pre-processing methods and variable selection algorithms, such as interval partial least squares (iPLS), the genetic algorithm (GA), the successive projections algorithm (SPA) and ordered predictors selection (OPS). Validation models for SSC, pH and titratable acidity had correlation coefficients (R) of 0.95, 0.90 and 0.80, and root mean square errors of prediction (RMSEP) of 0.45 °Brix, 0.07 and 0.40%, respectively. From these results, it can be concluded that NIR spectroscopy can be used as a non-destructive alternative for measuring the SSC, pH and titratable acidity of plums.
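
A minimal sketch of the iPLS variable-selection idea used above: fit one PLS model per spectral interval and keep the interval with the lowest cross-validated error. The synthetic spectra, the window width of 50 variables and the 3 latent variables are assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
X = rng.normal(size=(80, 600))            # 80 plum spectra, 600 wavelengths
y = X[:, 150:160].sum(axis=1) + rng.normal(0, 0.3, 80)  # SSC-like response

width = 50
best = None
for start in range(0, X.shape[1], width):
    Xi = X[:, start:start + width]        # one spectral interval
    score = cross_val_score(PLSRegression(n_components=3), Xi, y,
                            scoring="neg_root_mean_squared_error", cv=5).mean()
    if best is None or score > best[0]:
        best = (score, start)
print("best interval:", best[1], "-", best[1] + width, "RMSECV:", -best[0])
```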

Relevance:

10.00%

Publisher:

Abstract:

Traditional applications of feature selection in areas such as data mining, machine learning and pattern recognition aim to improve the accuracy and reduce the computational cost of models. This is done by removing redundant, irrelevant or noisy data, finding a representative subset of the data that reduces its dimensionality without loss of performance. With the development of research on ensembles of classifiers, and the verification that this type of model outperforms the individual models when the base classifiers are diverse, a new field of application for feature selection research has emerged. In this new field, the goal is to find diverse subsets of features for building the base classifiers of ensemble systems. This work proposes an approach that maximizes the diversity of the ensembles by selecting feature subsets using a model that is independent of the learning algorithm and has low computational cost. This is done using bio-inspired metaheuristics with filter-based evaluation criteria.
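
A minimal sketch of the idea above: choose feature subsets for the base classifiers of an ensemble using a filter criterion (mutual information, no learning algorithm involved) while rewarding diversity between the subsets. The simple mutation loop and the equal weighting of relevance and diversity stand in for the bio-inspired metaheuristics of the thesis; they are assumptions, not its actual operators.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif

X, y = make_classification(n_samples=300, n_features=40, random_state=0)
relevance = mutual_info_classif(X, y, random_state=0)   # filter criterion

rng = np.random.default_rng(7)
n_members, k = 5, 10
subsets = [rng.choice(40, k, replace=False) for _ in range(n_members)]

def fitness(subsets):
    rel = np.mean([relevance[s].mean() for s in subsets])
    # mean pairwise Jaccard distance between subsets = ensemble diversity
    div = np.mean([1 - len(set(a) & set(b)) / len(set(a) | set(b))
                   for i, a in enumerate(subsets) for b in subsets[i + 1:]])
    return rel + div                       # equal weighting is an assumption

for _ in range(500):                       # mutate one member, keep if better
    cand = [s.copy() for s in subsets]
    m = rng.integers(n_members)
    cand[m][rng.integers(k)] = rng.integers(40)
    if fitness(cand) > fitness(subsets):
        subsets = cand
print([sorted(s) for s in subsets])
```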

Relevance:

10.00%

Publisher:

Abstract:

The next generation of computers is expected to rely on architectures with multiple processors and/or multicore processors. This raises challenges related to interconnection, operating frequency, on-chip area, power dissipation, performance and programmability. Networks-on-chip are considered the ideal interconnection and communication mechanism for this type of architecture, owing to their scalability, reusability and intrinsic parallelism. Communication in a network-on-chip is accomplished by transmitting packets that carry the data and instructions representing requests and responses between the processing elements interconnected by the network. Packets are transmitted in a pipelined fashion between the routers of the network, from the source to the destination of the communication, even allowing simultaneous communication between different source-destination pairs. From this observation, it is proposed to turn the entire communication infrastructure of the network-on-chip, with its routing, arbitration and storage mechanisms, into a high-performance parallel processing system. In this proposal, the packets are formed by the instructions and data that represent the applications, and they are executed by the routers as they are transmitted, exploiting the pipelined and parallel communication. Traditional processors are not used; instead, simple cores only control access to memory. An implementation of this idea is called IPNoSys (Integrated Processing NoC System), which has its own programming model and a routing algorithm that guarantees the execution of all instructions in the packets, preventing deadlock, livelock and starvation. The architecture provides mechanisms for input and output, interrupts and operating system support. As a proof of concept, a programming environment and a simulator for this architecture were developed in SystemC, allowing various parameters to be configured and several results to be obtained for its evaluation.
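
A toy sketch (not the IPNoSys ISA or its SystemC simulator) of the execution model described above: a packet carries a list of instructions plus operands, and each router on the path executes the next instruction before forwarding the packet. Router coordinates, opcodes and the accumulator are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    route: list            # routers still to visit
    program: list          # (op, immediate) pairs, executed one per hop
    acc: float = 0.0       # accumulator travelling with the packet

OPS = {"ADD": lambda a, x: a + x,
       "MUL": lambda a, x: a * x}

def hop(pkt: Packet):
    router = pkt.route.pop(0)
    if pkt.program:
        op, x = pkt.program.pop(0)
        pkt.acc = OPS[op](pkt.acc, x)    # execute while transmitting
        print(f"router {router}: {op} {x} -> acc={pkt.acc}")
    return pkt

pkt = Packet(route=[(0, 0), (0, 1), (1, 1)],
             program=[("ADD", 2), ("MUL", 5), ("ADD", 1)])
while pkt.route:
    hop(pkt)                             # acc ends at (0+2)*5+1 = 11
```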