927 results for "Algoritmo para pesagem" (weighing algorithm)


Relevance:

10.00%

Publisher:

Abstract:

This dissertation presents a methodology for the optimization of a building cold-water distribution system. It is a case study applied to the Tropical Búzios Residential Condominium, located at Búzios Beach, in the city of Nísia Floresta, on the east coast of the state of Rio Grande do Norte, twenty kilometers from Natal. Designing cold-water distribution networks according to standard NBR 5626 of the ABNT (Brazilian Association of Technical Standards) does not guarantee that the solution found is the optimal, least-cost one. An optimization methodology is therefore necessary to supply, among all feasible solutions, the one of minimum cost. In the optimization of the water distribution system of the Tropical Búzios Condominium, the Granados Method is used: an iterative optimization algorithm, based on dynamic programming, that yields the minimum-cost network as a function of the piezometric head of the reservoir. To apply this method to branched networks, a computer program written in C is used. The process is divided into two stages: obtaining the initial solution and reducing the upstream piezometric head. In the first stage, the smallest possible diameters that respect the maximum-velocity limit and the minimum-pressure requirements are used, and the upstream piezometric head is raised as needed to satisfy these requirements. In the second stage of the Granados Method, an iterative process gradually reduces the upstream head by replacing stretches of the network pipes with the next larger diameters at a minimum increase in network cost. The diameter change is made in the optimal stretch, the one with the smallest Exchange Gradient. The process stops when the desired upstream head is reached. The material costs of the optimized network are calculated and analyzed through comparison with the costs of the conventional network.
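
As an illustration only, the sketch below (in Python rather than the C program cited above) captures the exchange-gradient step of a Granados-type iteration: starting from the smallest feasible diameters, the pipe whose upgrade to the next diameter costs the least per metre of head saved is upgraded until the desired upstream head is reached. The data structures, the assumption that every upgrade lowers the upstream head, and the toy numbers are simplifications, not the dissertation's implementation.

```python
# Hedged sketch of a Granados-style "exchange gradient" iteration for a branched
# network. Pipe data, costs and head losses below are illustrative placeholders.

def exchange_gradient(pipe, next_d):
    """Cost increase per metre of head loss saved when pipe moves to next_d."""
    dc = pipe["cost"][next_d] - pipe["cost"][pipe["d"]]
    dh = pipe["hloss"][pipe["d"]] - pipe["hloss"][next_d]
    return dc / dh if dh > 0 else float("inf")

def optimize(pipes, head, target_head):
    """Lower the upstream head by upgrading, at each step, the pipe with the
    smallest exchange gradient, until the target head is reached."""
    while head > target_head:
        best, best_g, best_d = None, float("inf"), None
        for p in pipes:
            nd = p["d"] + 1                      # index of the next larger diameter
            if nd < len(p["cost"]):
                g = exchange_gradient(p, nd)
                if g < best_g:
                    best, best_g, best_d = p, g, nd
        if best is None:
            break                                # no further upgrade possible
        head -= best["hloss"][best["d"]] - best["hloss"][best_d]
        best["d"] = best_d
    return head, pipes

# Toy example: two pipes, three candidate diameters each (cost and head loss per stretch).
pipes = [
    {"d": 0, "cost": [10.0, 14.0, 20.0], "hloss": [5.0, 2.5, 1.2]},
    {"d": 0, "cost": [8.0, 11.0, 16.0], "hloss": [4.0, 2.0, 1.0]},
]
final_head, pipes = optimize(pipes, head=30.0, target_head=24.0)
```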

Relevance:

10.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

10.00%

Publisher:

Abstract:

In this work we study a connection between a non-Gaussian statistics, the Kaniadakis statistics, and complex networks. We show that the degree distribution P(k) of a scale-free network can be calculated by maximizing the information entropy in the context of non-Gaussian statistics. As an example, a numerical analysis based on the preferential attachment growth model is discussed, and the numerical behavior of the Kaniadakis and Tsallis degree distributions is compared. We also analyze the diffusive epidemic process (DEP) on a one-dimensional regular lattice. The model is composed of A (healthy) and B (sick) species that diffuse independently on the lattice with diffusion rates DA and DB, subject to the probabilistic dynamical rules A + B → 2B and B → A. This model belongs to the category of non-equilibrium systems with an absorbing state and a phase transition between active and inactive states. We investigate the critical behavior of the DEP using an auto-adaptive algorithm to find critical points: the method of automatic searching for critical points (MASCP). We compare our results with the literature and find that the MASCP successfully finds the critical exponents 1/ν and 1/zν in all the cases considered, DA = DB and DA ≠ DB. The simulations show that the DEP has the same critical exponents as expected from field-theoretical arguments. Moreover, we find that, contrary to a renormalization group prediction, the system does not show a discontinuous phase transition in the regime DA > DB.
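
The abstract mentions a numerical analysis based on the preferential attachment growth model; the hedged sketch below samples a degree distribution P(k) from a minimal Barabási-Albert-style growth process. The parameters and the seeding of the network are illustrative choices, not those of the work.

```python
import random
from collections import Counter

def preferential_attachment(n_nodes=10_000, m=2):
    """Grow a network by preferential attachment and return the node degrees."""
    targets = list(range(m))          # initial seed nodes
    repeated = []                     # each node appears here once per unit of degree
    degree = Counter()
    for new in range(m, n_nodes):
        for t in set(targets):        # set() avoids multi-edges (a mild approximation)
            degree[new] += 1
            degree[t] += 1
            repeated.extend([new, t])
        # next targets chosen with probability proportional to current degree
        targets = [random.choice(repeated) for _ in range(m)]
    return degree

deg = preferential_attachment()
pk = Counter(deg.values())            # degree histogram, i.e. P(k) up to normalization
```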

Relevance:

10.00%

Publisher:

Abstract:

In this thesis we deal with a class of composed networks formed by two tree networks, TP and TA, whose terminal nodes are connected to each other through a bipartite network BPA. We explore this network using a functional approach: we are interested in to what extent the topology, or structure, of TX (X = A or P) determines the links of BPA. This composed structure is a useful model in evolutionary biology, where TP and TA are the phylogenetic trees of plants and animals that interact in an ecological community. We use two cases of mutualistic interactions in this thesis: frugivory and pollination networks. We analyze how the phylogeny of TX determines, or is correlated with, BPA using a Monte Carlo approach. We use the phylogenetic distance among the elements that interact with a given species to construct an index κ that quantifies the influence of TX over BPA. The algorithm is based on the assumption that interaction matrices that follow the phylogeny of TX have a total phylogenetic distance smaller than the average distance of an ensemble of Monte Carlo realizations generated by adequately shuffling the data. We find that the phylogeny of the animal species has a more marked effect on the ecological matrix than the plant phylogeny.
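
A minimal sketch of the Monte Carlo test described above is given below, assuming a pairwise phylogenetic-distance matrix for the species of TX and a list of interaction partners for each species of the other tree; the exact normalization of the index κ and the shuffling scheme are assumptions for illustration, not the thesis' implementation.

```python
import numpy as np

def total_partner_distance(partners, phylo_dist):
    """Sum of pairwise phylogenetic distances among the partners of each species."""
    total = 0.0
    for idx in partners.values():
        for i in range(len(idx)):
            for j in range(i + 1, len(idx)):
                total += phylo_dist[idx[i], idx[j]]
    return total

def kappa(partners, phylo_dist, n_shuffles=1000):
    """Observed distance divided by the mean over a shuffled ensemble; values
    below 1 suggest that the phylogeny of T_X constrains the interactions."""
    observed = total_partner_distance(partners, phylo_dist)
    n = phylo_dist.shape[0]
    ensemble = []
    for _ in range(n_shuffles):
        perm = np.random.permutation(n)          # shuffle species labels on T_X
        shuffled = {k: [perm[i] for i in idx] for k, idx in partners.items()}
        ensemble.append(total_partner_distance(shuffled, phylo_dist))
    return observed / np.mean(ensemble)

# Toy example: four plants in two closely related pairs, two animals.
phylo_dist = np.array([[0, 1, 4, 4],
                       [1, 0, 4, 4],
                       [4, 4, 0, 1],
                       [4, 4, 1, 0]], dtype=float)
partners = {"animal_1": [0, 1], "animal_2": [2, 3]}
print(kappa(partners, phylo_dist))               # < 1: interactions follow the phylogeny
```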

Relevance:

10.00%

Publisher:

Abstract:

The physical processes that control stellar evolution are strongly influenced by several stellar parameters, such as rotational velocity, convective-envelope mass deepening, and magnetic field intensity. In this study we analyze the interconnection of some stellar parameters, such as lithium abundance A(Li), chromospheric activity, and magnetic field intensity, as well as the variation of these parameters as a function of age, rotational velocity, and convective-envelope mass deepening, for a selected sample of solar analog and solar twin stars. In particular, we analyze the convective-envelope mass deepening and the dispersion of lithium abundance for these stars. We also study the evolution of rotation in subgiant stars, since they represent the evolutionary stage that follows the solar analogs and twins. For this analysis, we compute evolutionary models with the TGEC code to derive the evolutionary stage and the convective-envelope mass deepening, and to determine more precisely the mass and age of these 118 stars. Our investigation shows a considerable dispersion of lithium abundance for the solar analog stars. We also find that this dispersion is not controlled by the depth of the convective zone, so the scatter of A(Li) cannot be explained by classical theories of mixing in the convective zone. We conclude that extra-mixing processes are necessary to explain this decrease of lithium abundance in solar analog and twin stars. For the subgiants, we compute the rotational period of 30 stars observed by the CoRoT satellite, applying two different methods: the Lomb-Scargle algorithm and the Plavchan periodogram. With the TGEC code we compute models with internal redistribution of angular momentum and confront the model predictions with the observational results. This analysis shows that solid-body rotation models are incompatible with the physical interpretation of the observational results. We also conclude that the magnetic field, the convective-envelope mass deepening, and the internal redistribution of angular momentum are essential to explain the evolution of low-mass stars and their observational characteristics. Based on a population synthesis simulation, we conclude that the solar neighborhood contains a considerable number of solar twins compared with the set discovered so far; altogether we predict the existence of around 400 solar analogs in the solar neighborhood (within 100 pc). We also study the angular momentum of solar analogs and twins, and conclude that the angular momentum added by a Jupiter-type planet placed at Jupiter's position is not enough to account for the angular momentum predicted by the Kraft law (Kraft 1970).
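
As a hedged illustration of the period determination mentioned above, the sketch below applies the Lomb-Scargle periodogram (via scipy.signal.lombscargle) to a synthetic, unevenly sampled light curve; the thesis applies the method to CoRoT photometry, and the trial-period grid here is arbitrary.

```python
import numpy as np
from scipy.signal import lombscargle

# Synthetic, unevenly sampled light curve standing in for a CoRoT time series.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 100, 500))            # days, uneven sampling
period_true = 12.3
flux = np.sin(2 * np.pi * t / period_true) + 0.3 * rng.standard_normal(t.size)

periods = np.linspace(2, 50, 2000)               # trial periods in days
ang_freqs = 2 * np.pi / periods                  # lombscargle expects angular frequencies
power = lombscargle(t, flux - flux.mean(), ang_freqs, normalize=True)
print("estimated rotational period: %.2f d" % periods[np.argmax(power)])
```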

Relevance:

10.00%

Publisher:

Abstract:

In this work, we study and compare two percolation algorithms, one elaborated by Elias and the other by Newman and Ziff, using theoretical tools of algorithm complexity analysis and an additional algorithm that performs an experimental comparison. The work is divided into three chapters. The first presents the definitions and theorems needed for a more formal mathematical study of percolation. The second presents the techniques used to estimate the complexity of the algorithms, namely worst-case, best-case, and average-case analysis. We use the worst-case technique to estimate the complexity of both algorithms and thus compare them. The last chapter shows several characteristics of each algorithm and, through the theoretical complexity estimates and the comparison of the execution time of the most important part of each one, compares these important algorithms that simulate percolation.
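
For reference, the sketch below implements the core of the Newman-Ziff approach for bond percolation on a periodic square lattice with a union-find structure, so that a full occupation sweep costs nearly O(N) amortized operations; the lattice size and the tracked observable (largest-cluster fraction) are illustrative choices, not taken from the work.

```python
import random

def newman_ziff(L=64):
    """Add bonds in random order on an L x L periodic square lattice, merging
    clusters with union-find, and track the largest-cluster fraction."""
    n = L * L
    parent = list(range(n))                     # union-find forest
    size = [1] * n
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]       # path halving
            i = parent[i]
        return i
    bonds = [(i, (i + 1) % L + (i // L) * L) for i in range(n)] \
          + [(i, (i + L) % n) for i in range(n)]  # right and down neighbours (periodic)
    random.shuffle(bonds)
    largest = 1
    largest_history = []
    for a, b in bonds:                          # occupy bonds one at a time
        ra, rb = find(a), find(b)
        if ra != rb:                            # merge two distinct clusters
            if size[ra] < size[rb]:
                ra, rb = rb, ra
            parent[rb] = ra
            size[ra] += size[rb]
            largest = max(largest, size[ra])
        largest_history.append(largest / n)     # largest-cluster fraction vs. bond count
    return largest_history

history = newman_ziff()
```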

Relevance:

10.00%

Publisher:

Abstract:

In this work we present an exposition of the mathematical theory of compactly supported orthogonal wavelets in the context of multiresolution analysis. These wavelets are particularly attractive because they lead to a stable and very efficient algorithm, the Fast Wavelet Transform (FWT). One of our objectives is to develop efficient algorithms for calculating the wavelet coefficients (FWT) through Mallat's pyramid algorithm and to discuss its connection with filter banks. We also study the concept of multiresolution analysis, which is the natural context in which wavelets can be understood and constructed, taking an important step in the passage from the mathematical universe (continuous domain) to the universe of representation (discrete domain).
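
A minimal sketch of one level of Mallat's pyramid algorithm with the Daubechies D4 filter pair is given below; the periodic boundary handling and the single decomposition level are simplifications for illustration, not the algorithms developed in the work.

```python
import numpy as np

# Daubechies D4 low-pass (scaling) filter and the associated high-pass (wavelet) filter.
s3 = np.sqrt(3)
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2))
g = h[::-1] * np.array([1, -1, 1, -1])

def fwt_step(x):
    """One pyramid step: convolve with the two filters and downsample by two,
    returning (approximation, detail) coefficients at half the length."""
    n = len(x)
    approx = np.empty(n // 2)
    detail = np.empty(n // 2)
    for k in range(n // 2):
        window = np.take(x, range(2 * k, 2 * k + 4), mode="wrap")   # periodic extension
        approx[k] = np.dot(h, window)
        detail[k] = np.dot(g, window)
    return approx, detail

signal = np.sin(np.linspace(0, 4 * np.pi, 64))
a1, d1 = fwt_step(signal)       # further levels of the FWT would recurse on a1
```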

Relevance:

10.00%

Publisher:

Abstract:

In this work we study the survival cure rate model proposed by Yakovlev (1993), considered in a competing risks setting. Covariates are introduced to model the cure rate, and some covariates are allowed to have missing values. We consider only the case in which the missing covariates are categorical and implement the EM algorithm via the method of weights for maximum likelihood estimation. We present a Monte Carlo simulation experiment to compare the properties of the estimators based on this method with those of the estimators under the complete-case scenario. In this experiment we also evaluate the impact on the parameter estimates of increasing the proportion of immune and censored individuals among the non-immune ones. We illustrate the proposed methodology with a real data set involving the time until graduation for the undergraduate Statistics course of the Universidade Federal do Rio Grande do Norte.
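
For orientation, a promotion-time (Yakovlev-type) cure rate model with a log-linear link for the covariates can be written as below; the particular link and notation are assumptions for illustration, not necessarily those adopted in the work.

```latex
% Promotion-time cure rate model (Yakovlev-type); log-linear link assumed for illustration.
\[
  S_{\mathrm{pop}}(t \mid x) = \exp\{-\theta(x)\,F(t)\}, \qquad
  \theta(x) = \exp(x^{\top}\beta),
\]
\[
  \text{cure fraction: } \lim_{t\to\infty} S_{\mathrm{pop}}(t \mid x) = \exp\{-\theta(x)\},
\]
% where F(t) is a proper distribution function for the latent promotion times.
```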

Relevance:

10.00%

Publisher:

Abstract:

In this work we develop a spline-based method for the solution of initial value problems involving ordinary differential equations, with emphasis on linear equations. The method can be seen as an alternative to traditional solvers such as Runge-Kutta and avoids root calculations in the linear time-invariant case. The method is then applied to a central problem of control theory, namely the step response problem for linear ODEs with possibly varying coefficients, where root calculations do not apply. We implemented an efficient algorithm that uses exclusively matrix-vector operations. The working interval (up to the settling time) was determined through the calculation of the least stable mode using a modified power method. Several variants of the method were compared by simulation. For general linear problems on a fine grid, the proposed method compares favorably with the Euler method. In the time-invariant case, where the alternative is root calculation, we have indications that the proposed method is competitive for equations of sufficiently high order.
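
As a hedged illustration of the settling-time step described above, the sketch below estimates the least stable (slowest-decaying) mode of a stable linear system by applying a plain power iteration to the propagator exp(Ah); the system matrix, step size, and the 2% settling-time rule are illustrative assumptions, not the modified power method of the work.

```python
import numpy as np
from scipy.linalg import expm

def least_stable_mode(A, h=0.1, iters=200):
    """Power iteration on the propagator exp(A*h); its dominant eigenvalue
    corresponds to the eigenvalue of A with the largest real part."""
    P = expm(A * h)
    v = np.ones(A.shape[0])
    for _ in range(iters):
        v = P @ v
        v /= np.linalg.norm(v)                   # keep the iterate normalized
    mu = v @ (P @ v)                             # Rayleigh-type estimate of exp(lambda*h)
    return np.log(mu) / h                        # approximate slowest eigenvalue of A

A = np.array([[0.0, 1.0], [-2.0, -3.0]])         # eigenvalues -1 and -2
lam = least_stable_mode(A)                       # ≈ -1.0, the least stable mode
settling_time = 4.0 / abs(lam)                   # rough 2% settling-time estimate
```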

Relevance:

10.00%

Publisher:

Abstract:

In this paper we study a random strategy called MOSES, introduced in 1996 by François. Asymptotic results for this strategy, concerning the behavior of the stationary distributions of the chain associated with the strategy, were derived by François in 1998 from the theory of Freidlin and Wentzell [8]. These results are detailed in this work. Moreover, we note that an alternative approach to the convergence of this strategy is possible without making use of the theory of Freidlin and Wentzell, yielding the almost sure visit of the strategy to uniform populations that contain the minimum. Some simulations in Matlab are presented in this work.

Relevance:

10.00%

Publisher:

Abstract:

The objective of this work was to evaluate the effect of feed supplements on the growth of calves grazing a Panicum maximum cv. Mombaça pasture during the dry season. The experimental design was randomized blocks with three treatments and three replications. The treatments were: mineral salt ad libitum; multiple mixture (MM) fed at 0.2% of live weight (LW); and concentrate supplement (SC) fed at 0.7% of LW. Thirty-six weaned calves averaging eight months of age and 192 kg of initial live weight were used. The forage mass and its components, the nutritive value, and the rate of forage growth were evaluated. Animal performance was measured as average daily gain (ADG) and live weight gain (LWG). The supplement allowance was adjusted after each weighing. There was no difference between periods for forage mass and leaf:stem ratio. The highest values of green forage mass, leaf blade mass, and stem percentage were observed in the first trial period. Canopy height and the forage on offer did not differ among treatments. The percentage of dead material was higher in the last evaluation periods. The leaf:stem ratio and the leaf percentage were greater in the second period. There was a significant difference (p < 0.05) among treatments for ADG, which was 250, 460, and 770 g/day for mineral salt, MM, and SC, respectively. The highest LWG was observed in the SC treatment. The contents of crude protein (PB), in vitro organic matter digestibility (DIVMO), NDF, and acid detergent lignin (LDA) in leaf blades, stems, and dead material did not differ among treatments. Regardless of the use of supplements, it is possible to keep steers gaining weight during the dry season, provided that the stocking rate is appropriately adjusted.

Relevance:

10.00%

Publisher:

Abstract:

This work analyzes the application of heuristic algorithms to the Hybrid Linear Model (HLM) in the transmission system expansion planning problem. The HLM is a relaxed model that has not yet been sufficiently explored. Thus, an analysis is carried out of the characteristics of the mathematical model and of the solution techniques that can be used to solve this type of model. The work analyzes in detail a constructive heuristic algorithm for the HLM and extends the modeling and the solution technique to multistage transmission expansion planning. Within this context, an evaluation is also made of the quality of the solutions found by the HLM and of the possibilities of applying this model in transmission system planning. Finally, tests with systems well known in the specialized literature are presented.
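
A generic constructive heuristic loop of the kind analyzed in the work can be sketched as below; solve_relaxed_model and sensitivity_index are hypothetical placeholders for the model-specific pieces (the HLM relaxation and the circuit-selection index), not the paper's algorithm.

```python
def constructive_heuristic(candidates, solve_relaxed_model, sensitivity_index):
    """At each step, solve the relaxed model for the current plan, rank the
    candidate circuits by a sensitivity index, and add the most attractive one,
    until the relaxed operation needs no load shedding."""
    plan = []                                   # circuits added so far
    while True:
        solution = solve_relaxed_model(plan)
        if solution["load_shedding"] <= 1e-6:   # feasible operation reached
            return plan, solution
        best = max(candidates, key=lambda c: sensitivity_index(c, solution))
        plan.append(best)
        candidates = [c for c in candidates if c is not best]
```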

Relevance:

10.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance:

10.00%

Publisher:

Abstract:

In this work, the quantitative analysis of glucose, triglycerides, and cholesterol (total and HDL) in both rat and human blood plasma was performed, without any kind of sample pretreatment, using near-infrared (NIR) spectroscopy combined with multivariate methods. For this purpose, different techniques and algorithms for data pre-processing, variable selection, and multivariate regression model building were compared, such as partial least squares regression (PLS), nonlinear regression by artificial neural networks (ANN), interval partial least squares regression (iPLS), the genetic algorithm (GA), and the successive projections algorithm (SPA), among others. For the determinations in rat blood plasma samples, the variable selection algorithms showed satisfactory results both for the correlation coefficients (R²) and for the root mean square error of prediction (RMSEP) of the three analytes, especially triglycerides and HDL cholesterol. The RMSEP values for glucose, triglycerides, and HDL cholesterol obtained with the best PLS model were 6.08, 16.07, and 2.03 mg dL-1, respectively. For the determinations in human blood plasma, on the other hand, the predictions obtained by the PLS models were unsatisfactory, showing a nonlinear tendency and the presence of bias. ANN regression was then applied as an alternative to PLS, given its ability to model data from nonlinear systems. The root mean square errors of monitoring (RMSEM) for glucose, triglycerides, and total cholesterol, for the best ANN models, were 13.20, 10.31, and 12.35 mg dL-1, respectively. Statistical tests (F and t) suggest that NIR spectroscopy combined with multivariate regression methods (PLS and ANN) is capable of quantifying these analytes (glucose, triglycerides, and cholesterol) even when they are present in highly complex biological fluids such as blood plasma.
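
As a hedged illustration of the PLS calibration workflow described above, the sketch below fits a PLS model with scikit-learn on synthetic data standing in for NIR spectra and computes an RMSEP on a validation split; the number of latent variables and the data are illustrative, not the paper's models.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for NIR spectra (X) and a reference analyte concentration (y).
rng = np.random.default_rng(1)
n_samples, n_wavelengths = 120, 500
X = rng.standard_normal((n_samples, n_wavelengths))
true_coef = np.zeros(n_wavelengths)
true_coef[100:110] = 1.0                          # a few informative "bands"
y = X @ true_coef + 0.1 * rng.standard_normal(n_samples)

# Calibration/validation split, PLS fit, and root mean square error of prediction.
X_cal, X_val, y_cal, y_val = train_test_split(X, y, test_size=0.3, random_state=0)
pls = PLSRegression(n_components=5).fit(X_cal, y_cal)
y_pred = pls.predict(X_val).ravel()
rmsep = np.sqrt(np.mean((y_val - y_pred) ** 2))
print("RMSEP: %.3f" % rmsep)
```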