92 results for Tributação ótima (optimal taxation)


Relevance:

10.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico

Relevance:

10.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior

Relevance:

10.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior

Relevance:

10.00%

Publisher:

Abstract:

This study presents a comparative analysis of methodologies based on the weighted factors considered in the selection of areas for the deployment of sanitary landfills, applying the classification criteria with scoring bands of Gomes, Coelho, Erba & Veronez (2000) and of Waquil et al. (2000), i.e., the scoring system used by the Union of Municipalities of Bahia and the Waste Landfill Quality Index (IQR). These methodologies were applied to the Massaranduba Sanitary Landfill, located in the municipality of Ceará-Mirim/RN, in northeastern Brazil. The study was conducted in order to rank the methodologies and to support future work in environmental management, with the main goal of proposing suitable methodologies that ensure safety and rigor during the selection, deployment and management of sanitary landfills in Brazilian municipalities, helping them in the process of closing their open dumps, in accordance with the Brazilian National Plan of Solid Waste. During this investigation we studied the morphological, hydrogeological, environmental and socio-economic characteristics of the site that permit the installation. We also consider it important to note the need for the Rio Grande do Norte State Secretariat of Environment and Water Resources (SEMARH), the Institute of Sustainable Development and Environment of RN (IDEMA), and the federal and municipal governments to deploy public policies for the integrated management of urban solid waste that address environmental preservation and the improvement of the health conditions of the population of Rio Grande do Norte.
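The scoring-band methodologies cited above are not detailed in the abstract; as a generic, hypothetical illustration of how a weighted-factor site score might be computed, consider the following sketch (criteria names and weights are invented for the example):

```python
# Hypothetical illustration of a weighted-factor site score; the actual
# criteria and weights of the methodologies cited in the abstract are not
# reproduced here.

def site_score(scores, weights):
    """scores and weights: dicts keyed by criterion; weights sum to 1.0."""
    return sum(weights[c] * scores[c] for c in weights)

weights = {"hydrogeology": 0.4, "distance_to_housing": 0.3, "access_roads": 0.3}
scores  = {"hydrogeology": 8.0, "distance_to_housing": 6.0, "access_roads": 9.0}
print(round(site_score(scores, weights), 2))  # 7.7
```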

Relevance:

10.00%

Publisher:

Abstract:

Internet applications such as media streaming, collaborative computing and massively multiplayer games are on the rise. This creates a need for multicast communication, but group communication support based on IP multicast has not been widely adopted, owing to a combination of technical and non-technical problems. A number of application-layer multicast schemes have therefore been proposed in the recent literature to overcome these drawbacks. In addition, these applications often behave as both providers and clients of services, being called peer-to-peer applications, and their participants join and leave very dynamically. Server-centric architectures for membership management thus have well-known problems of scalability and fault tolerance, and even traditional peer-to-peer solutions need some mechanism that takes members' volatility into account. The idea of location awareness is to distribute the participants in the overlay network according to their proximity in the underlying network, allowing better performance. In this context, this thesis proposes an application-layer multicast protocol, called LAALM, which takes the actual network topology into account when assembling the overlay network. The membership algorithm uses a new metric, IPXY, to provide location awareness through the processing of local information, and it was implemented using a distributed, shared, bi-directional tree. The algorithm also includes a sub-optimal heuristic to minimize the cost of the membership process. The protocol was evaluated in two ways. First, through a simulator developed in this work, we evaluated the quality of the distribution tree using metrics such as out-degree and path length. Second, real-life scenarios were built in the ns-3 network simulator, where we evaluated the protocol's network performance using metrics such as stress, stretch, time to first packet and group reconfiguration time.
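The abstract does not reproduce the LAALM membership rules; as a rough, hypothetical illustration of location-aware parent selection in an overlay tree, the sketch below picks the closest candidate parent that still has spare out-degree (the generic "proximity" value stands in for a metric such as IPXY, whose definition is not given here):

```python
# Hypothetical sketch of location-aware parent selection in an overlay tree.
# "proximity" stands in for a metric such as LAALM's IPXY (lower = closer);
# the real protocol's rules are not reproduced in the abstract.

def choose_parent(candidates, max_out_degree):
    """candidates: list of dicts with 'id', 'proximity', 'children' (count)."""
    eligible = [c for c in candidates if c["children"] < max_out_degree]
    if not eligible:
        return None  # no parent has room; a real protocol would retry or split
    return min(eligible, key=lambda c: c["proximity"])["id"]

candidates = [
    {"id": "peer-A", "proximity": 12.5, "children": 3},
    {"id": "peer-B", "proximity": 4.8, "children": 2},
    {"id": "peer-C", "proximity": 4.1, "children": 4},
]
print(choose_parent(candidates, max_out_degree=4))  # peer-B (closest with room)
```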

Relevance:

10.00%

Publisher:

Abstract:

Combinatorial optimization problems have engaged a large number of researchers in the search for approximate solutions, since it is generally accepted that they cannot be solved exactly in polynomial time. Initially, these solutions focused on heuristics; currently, metaheuristics are more often used for this task, especially those based on evolutionary algorithms. The two main contributions of this work are: the creation of a heuristic called "Operon" for constructing the information chains necessary for the implementation of transgenetic (evolutionary) algorithms, mainly using statistical methodology, namely Cluster Analysis and Principal Component Analysis; and the use of statistical analyses suited to evaluating the performance of the algorithms developed to solve these problems. The aim of the Operon is to construct good-quality dynamic information chains that promote an "intelligent" search of the solution space. The Traveling Salesman Problem (TSP) is used as the application of a transgenetic algorithm known as ProtoG. A strategy is also proposed for renewing part of the chromosome population, triggered by adopting a minimum limit on the coefficient of variation of the fitness function of the individuals, computed over the population. Statistical methodology is used to evaluate the performance of four algorithms: the proposed ProtoG, two memetic algorithms and a Simulated Annealing algorithm. Three performance analyses are proposed. The first uses Logistic Regression, based on the probability that the algorithm under test finds an optimal solution for a TSP instance. The second uses Survival Analysis, based on the probability distribution of the execution time observed until an optimal solution is reached. The third uses a non-parametric Analysis of Variance, considering the Percent Error of the Solution (PES), the percentage by which the solution found exceeds the best solution available in the literature. Six experiments were conducted on sixty-one Euclidean TSP instances with up to 1,655 cities. The first two experiments deal with the adjustment of four parameters used in the ProtoG algorithm in an attempt to improve its performance. The last four were undertaken to evaluate the performance of ProtoG in comparison with the three other algorithms. For these sixty-one instances, statistical tests provide evidence that ProtoG performs better than the three other algorithms on fifty instances. In addition, for the thirty-six instances considered in the last three trials, in which performance was evaluated through the PES, the average PES obtained with ProtoG was below 1% for almost half of the instances, reaching its largest value, 3.52%, for one instance with 1,173 cities. Therefore, ProtoG can be considered a competitive algorithm for solving the TSP, since it is not rare in the literature to find average PES values greater than 10% reported for instances of this size.
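As a small illustration of the PES metric described above (the numbers are illustrative, not results from the thesis), the percent error of a tour follows directly from the cost found and the best known cost:

```python
# Minimal sketch of the Percent Error of the Solution (PES) described above:
# the percentage by which the solution found exceeds the best known solution.

def percent_error(found_cost: float, best_known_cost: float) -> float:
    return 100.0 * (found_cost - best_known_cost) / best_known_cost

# Example: a tour of length 21_400 against a best known length of 21_282
print(round(percent_error(21_400, 21_282), 2))  # ~0.55 (%)
```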

Relevance:

10.00%

Publisher:

Abstract:

In this work, Markov chains are the tool used for modeling and analyzing the convergence of the genetic algorithm, both in its standard version and in the other versions that the genetic algorithm allows. In addition, we compare the performance of the standard version with a fuzzy version, in the belief that the fuzzy version gives the genetic algorithm a greater ability to find a global optimum, as is characteristic of global optimization algorithms. This algorithm was chosen because, over the past thirty years, it has become one of the most important tools for finding solutions to optimization problems, and because of its effectiveness in finding a good-quality solution, which is acceptable given that there may be no other algorithm able to obtain the optimal solution for many of these problems. The algorithm's behavior depends not only on how the problem is represented but also on how some of its operators are defined, ranging from the standard version, in which the parameters are kept fixed, to versions with variable parameters. Therefore, to achieve good performance with this algorithm, an adequate criterion is needed for choosing its parameters, especially the mutation rate and the crossover rate, or even the population size. It is important to remember that in implementations in which the parameters are kept fixed throughout the execution, modeling the algorithm by a Markov chain yields a homogeneous chain, whereas when the parameters are allowed to vary during the execution, the resulting Markov chain is non-homogeneous. Thus, in an attempt to improve the algorithm's performance, some studies have tried to set the parameters through strategies that capture the intrinsic characteristics of the problem. These characteristics are extracted from the current state of the execution, in order to identify and preserve patterns associated with good-quality solutions while discarding low-quality patterns. Strategies for feature extraction can use either precise techniques or fuzzy techniques, the latter implemented through a fuzzy controller. A Markov chain is used for the modeling and convergence analysis of the algorithm in both versions. To evaluate the performance of the non-homogeneous algorithm, tests are applied comparing the standard genetic algorithm with the fuzzy genetic algorithm, whose mutation rate is adjusted by a fuzzy controller. For this purpose, optimization problems are chosen whose number of solutions grows exponentially with the number of variables.
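The abstract does not specify the fuzzy controller's rules; as a rough, hypothetical illustration of a non-homogeneous parameter adjustment in the spirit described above, the sketch below raises the mutation rate when the coefficient of variation of the population's fitness drops below a threshold (i.e. when diversity is low):

```python
# Hypothetical sketch (not the thesis' fuzzy controller): adapt the mutation
# rate from the coefficient of variation (CV) of the population's fitness.
# A low CV means low diversity, so mutation is increased to keep exploring.
import statistics

def adapt_mutation_rate(fitnesses, base_rate=0.01, boosted_rate=0.10, cv_threshold=0.05):
    mean = statistics.mean(fitnesses)
    if mean == 0:
        return boosted_rate
    cv = statistics.stdev(fitnesses) / abs(mean)
    return boosted_rate if cv < cv_threshold else base_rate

print(adapt_mutation_rate([10.1, 10.2, 10.0, 10.1]))  # low diversity -> 0.1
print(adapt_mutation_rate([5.0, 12.0, 9.5, 20.0]))    # high diversity -> 0.01
```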

Relevance:

10.00%

Publisher:

Abstract:

In this work, a new online algorithm is proposed to solve the k-Server Problem (KSP). Its performance is compared with that of other algorithms in the literature, namely the Harmonic and Work Function algorithms, which have been shown to be competitive, making them meaningful benchmarks. An algorithm that performs efficiently relative to them also tends to be competitive, although this would, of course, have to be proved; such a proof, however, is beyond the scope of this work. The algorithm presented for solving the KSP is based on reinforcement learning techniques. To this end, the problem was modeled as a multi-stage decision process, to which the Q-Learning algorithm is applied, one of the most popular methods for establishing optimal policies in this kind of decision problem. It should be noted, however, that the size of the storage structure used by reinforcement learning to obtain the optimal policy grows with the number of states and actions, which in turn is proportional to the number n of nodes and k of servers. An analysis of this growth shows that it is exponential, limiting the application of the method to smaller problems, where the numbers of nodes and servers are small. This problem, known as the curse of dimensionality, was introduced by Bellman and means that an algorithm cannot be executed for certain instances of a problem because the computational resources needed to produce its output are exhausted. So that the proposed solution, based exclusively on reinforcement learning, is not restricted to small applications, an alternative solution is proposed for more realistic problems involving larger numbers of nodes and servers. This alternative solution is hierarchical and uses two methods to solve the KSP: reinforcement learning, applied to a reduced number of nodes obtained through an aggregation process, and a greedy method, applied to the subsets of nodes resulting from the aggregation, where the criterion for scheduling the servers is the shortest distance to the demand location.
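As a minimal illustration of the greedy criterion mentioned at the end of the abstract (serving each demand with the currently closest server), and not of the reinforcement-learning component, one might write:

```python
# Minimal sketch of the greedy k-server policy described above: each demand is
# served by the currently closest server (positions and distances are hypothetical).

def serve_greedy(servers, demand, dist):
    """servers: dict server_id -> node; returns (chosen_server, cost) and moves it."""
    chosen = min(servers, key=lambda s: dist(servers[s], demand))
    cost = dist(servers[chosen], demand)
    servers[chosen] = demand  # the chosen server moves to the demand node
    return chosen, cost

servers = {"s1": (0, 0), "s2": (10, 0)}
euclid = lambda a, b: ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
print(serve_greedy(servers, (8, 1), euclid))  # s2 serves, cost ~2.24
```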

Relevance:

10.00%

Publisher:

Abstract:

Pattern classification is one of the most prominent subareas of machine learning. Among the various approaches to solving pattern classification problems, Support Vector Machines (SVM) receive great emphasis due to their ease of use and good generalization performance. The Least Squares formulation of the SVM (LS-SVM) finds its solution by solving a set of linear equations instead of the quadratic programming used in the SVM. LS-SVMs have some free parameters that must be chosen correctly to achieve satisfactory results in a given task. Although LS-SVMs achieve high performance, many tools have been developed to improve them, mainly new classification methods and the use of ensembles, that is, combinations of several classifiers. In this work, our proposal is to use an ensemble and a Genetic Algorithm (GA), a search algorithm based on the evolution of species, to enhance LS-SVM classification. In the construction of this ensemble, we use a random selection of attributes of the original problem, which splits the original problem into smaller ones on which each classifier acts. We then apply a genetic algorithm to find effective values for the LS-SVM parameters and also to find a weight vector measuring the importance of each machine in the final classification. Finally, the final classification is obtained by a linear combination of the decision values of the LS-SVMs with the weight vector. We used several classification problems, taken as benchmarks, to evaluate the performance of the algorithm and compared the results with those of other classifiers.
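As a minimal sketch of the final combination step described above (the weights and decision values below are illustrative, not from the thesis):

```python
# Minimal sketch of combining the ensemble members' decision values with a
# GA-found weight vector, as described above; values are illustrative only.

def combine(decision_values, weights):
    """decision_values: one real-valued output per LS-SVM; weights: same length."""
    score = sum(w * d for w, d in zip(weights, decision_values))
    return 1 if score >= 0 else -1  # sign of the weighted sum gives the class

print(combine([0.8, -0.2, 0.4], [0.5, 0.2, 0.3]))  # -> 1
```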

Relevance:

10.00%

Publisher:

Abstract:

In recent years there has been significant growth in technologies that modify implant surfaces, reducing healing time and allowing their successful use in areas with low bone density. One of the most widely used techniques is plasma nitriding, applied with excellent results to titanium and its alloys, most frequently in the manufacture of hip, ankle and shoulder implants. However, its use in dental implants is very limited owing to the high process temperatures (between 700 °C and 800 °C), which cause distortions in these geometrically complex and highly precise components. The aim of the present study is to assess the osseointegration and mechanical strength of grade II titanium samples nitrided in a hollow cathode discharge configuration. In addition, new formulations are proposed to determine the optimum structural topology of the dental implant under study, in order to improve its shape and make it efficient, competitive and of high resolution. In the nitriding process, the samples were treated at a temperature of 450 °C and a pressure of 150 Pa for 1 hour. This condition was selected because it produced the best wettability results in previous studies, in which different pressure, temperature and time conditions were tested systematically. The samples were characterized by X-ray diffraction, scanning electron microscopy, and roughness, microhardness and wettability measurements. Biomechanical fatigue tests were then conducted. Finally, a formulation using the three-dimensional structural topology optimization method was proposed, in conjunction with an h-adaptive refinement process. The results showed that plasma nitriding, using the hollow cathode discharge technique, changed the surface texture of the test specimens, increasing surface roughness, wettability and microhardness compared to the untreated sample. In the biomechanical fatigue test, the treated implant showed no failures after five million cycles at a maximum fatigue load of 84.46 N. The results of the topological optimization process showed well-defined optimized layouts of the dental implant, with a clear distribution of material and defined edges.

Relevance:

10.00%

Publisher:

Abstract:

The topology optimization problem consists in characterizing and determining the optimum distribution of material within a domain. In other words, once the boundary conditions of a pre-established domain are defined, the problem is how to distribute the material so as to solve the minimization problem. The objective of this work is to propose a competitive formulation for determining optimum structural topologies in 3D problems, able to provide high-resolution layouts. The procedure combines the Galerkin Finite Element Method with the optimization method, seeking the best material distribution over the fixed design domain. The layout topology optimization method is based on the material approach proposed by Bendsoe & Kikuchi (1988) and considers a homogenized constitutive equation that depends only on the relative density of the material. The finite element used in the approach is a four-node tetrahedron with a selective integration scheme, which interpolates not only the components of the displacement field but also the relative density field. The proposed procedure consists in solving a sequence of layout optimization problems applied to compliance minimization and to mass minimization under local stress constraints. The microstructure used in this procedure was SIMP (Solid Isotropic Material with Penalization). The approach considerably reduces the computational cost and proved to be efficient and robust. The results provided well-defined structural layouts, with a sharp distribution of material and well-defined boundaries. The layout quality was proportional to the average element size, and a considerable reduction in the number of design variables was observed thanks to the tetrahedral element.
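The abstract does not reproduce the SIMP interpolation; for reference, the usual form of this material model (a standard expression, not quoted from the thesis) relates the effective stiffness to the relative density as

```latex
% Standard SIMP interpolation (not quoted from the thesis): the effective
% Young's modulus is a penalized function of the relative density \rho.
E(\rho) = \rho^{p} E_{0}, \qquad 0 < \rho_{\min} \le \rho \le 1, \quad p > 1
```

where the penalty exponent p (commonly set around 3) makes intermediate densities uneconomical, driving the layout toward a clear solid/void distribution.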

Relevance:

10.00%

Publisher:

Abstract:

The need to build durable structures that are resistant to harsh environments drove the development of high-strength concrete; these activities involve high cement consumption, which is a factor in CO2 emissions. Often the desired strength is not achieved using the cement composition alone. This study aims to evaluate the influence of pozzolans, through the addition of metakaolin, on the physical and mechanical properties of high-strength concrete, comparing them with a standard formulation. Tests were performed to characterize the aggregates according to NBR 7211, and the cement and coarse aggregate were evaluated through petrography (NBR 15577-3/08) and the alkali-aggregate reaction (NBR 15577-05/08). Specimens were fabricated according to NBR 5738-1/04 with additions of 0%, 4%, 6%, 8% and 10% of metakaolin relative to the CP V cement in the formulations. The concrete was evaluated in the fresh and hardened states through slump tests and compressive strength tests, in accordance with NBR 7223/1992 and NBR 5739-8/94, respectively. The characterization of the aggregates showed good grading and petrographic characteristics, as well as potentially innocuous behavior with respect to the alkali-aggregate reaction. In the compressive strength test, all the formulations with added metakaolin showed higher values at 28 days than the standard formulation. These results offer an alternative for reducing CO2 emissions and improving the quality and durability of concrete, because the fine particle size of metakaolin provides optimal packing of the mass, directly influencing the strength and rheology of the mix.

Relevance:

10.00%

Publisher:

Abstract:

Cashew nut shell liquid (CNSL) is a phenolic oil that has attracted attention for use in fuels due to its antioxidant properties. The present work develops a method for converting hydrogenated cardanol, the main component of CNSL, into a compound with characteristics similar to those of the antioxidants used in petroleum products. The antioxidant was obtained by exhaustive alkylation of the compound with tert-butyl chloride. After the optimization of several reaction steps, the product 2,4,6-tri-tert-butyl-pentadecylphenol was obtained for the first time. Characterization and determination of its physico-chemical properties were also carried out, and a molecular modeling study was developed to assess its application as an oxidation inhibitor. A process evaluation was also performed, using a rapid and practical computational methodology employed in fine chemistry projects. The research showed satisfactory results, and it could be concluded that the commercialization of this chemical product is feasible.

Relevance:

10.00%

Publisher:

Abstract:

In the petroleum industry, water is always present in the reservoir formation together with the petroleum and natural gas, so water is produced along with the petroleum, resulting in a great environmental impact. Several methods can be applied to the treatment of oily waters, such as gravity separation vessels, granular media filtration systems, flotation, centrifugation and hydrocyclones, which can also be used in combination. The flotation process, however, has shown great efficiency compared with the other methods, since these do not remove a large part of the emulsified oil. In this work we investigated the use of surfactants derived from vegetable oils, OSS and OGS, as collectors, using a flotation process in a glass column with a porous plate at its base for the injection of the gas stream. For this purpose, oil/water emulsions with concentrations of around 300 ppm were prepared by mechanical stirring. The air flow rate was set at 700 cm3/min, and the porous plate used to generate the air bubbles has pores ranging from 16 to 40 μm. The column operated at constant volume (1500 mL). A new sampling methodology was developed in which, instead of collecting the water phase, the oil phase removed by the process was collected at the top of the flotation column. It was observed that an optimum surfactant concentration must be found to achieve the highest removal efficiency: 1.275 mmol/L for OSS and 0.840 mmol/L for OGS, with removal efficiencies of 93% and 99%, respectively, for synthetic solutions. For produced water, the removal at these concentrations was 75% for OSS and 65% for OGS. It is therefore possible to remove oil from water in a flotation process using surfactants of high HLB, which runs counter to the very definition of HLB (Hydrophile-Lipophile Balance). The interfacial tension is an important factor in oil removal by flotation, because it directly affects the coalescence of the oil drops. The spreading of the oil over the air bubble should also be considered in the process, and at the optimum surfactant concentrations it reached a maximum value. The removal kinetics of the flotation process using surfactants at the optimum concentration were fitted to a first-order model, both for the synthetic water and for the produced water.
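The abstract cites a first-order kinetic model without writing it out; the usual form of such a model (a standard expression, not quoted from the thesis) is

```latex
% Standard first-order removal kinetics (not quoted from the thesis):
% C(t) is the residual oil concentration and k the first-order rate constant.
C(t) = C_{0}\, e^{-kt} \quad\Longrightarrow\quad \ln\frac{C_{0}}{C(t)} = kt
```

so a plot of ln(C0/C(t)) against flotation time should be approximately linear, with slope k.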

Relevance:

10.00%

Publisher:

Abstract:

In the oil industry, the oil/water mixture occurs in production, transportation and refining operations, as well as during the use of its derivatives. The amount of water produced along with the oil varies and can reach 90% by volume in the mature phase of production fields. The present work deals with the development of a new design of the Mixer-Settler based on Phase Inversion (MDIF) at laboratory scale. Since an industrial-scale application is envisaged, the design, construction and operation phases are all considered. The most significant modifications, in comparison with the original prototype, include the construction materials and the replacement of the equipment used in the mixing stage of the process: the feasibility of substituting a static mixer for the original mechanical mixing system was tested. A statistical treatment based on a central composite experimental design was used to evaluate the behavior of the main variables of the separation process as a function of the separation efficiency of the new device. This procedure is useful for delimiting an optimal operating region for the equipment. The process variables considered in the experimental design were: oil concentration in the feed water (mg/L); total volumetric flow rate (L/h); and organic/water ratio on a volumetric basis (O/A). The separation efficiency is calculated by comparing the oil and grease content at the inlet and at the outlet of the equipment. For the determination of TOG (Total Oil and Grease), the method used was based on the absorption of radiation in the infrared region, using an InfraCal® TOG/TPH Model HATR-T2 analyzer from Wilks Enterprise, Inc. It is worth noting that this measurement method has been used by PETROBRAS S.A. The overall oil/water separation efficiency varied from 75.3 to 97.7% for contaminated waters containing up to 1,664.1 mg/L of oil. In tests carried out with a real sample of contaminated water supplied by PETROBRAS, the effluent obtained met the legal standards required for discharge. Thus, the new equipment design constitutes a real alternative to the conventional systems for treating produced water in the oil industry.
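As a minimal sketch of the separation-efficiency calculation described above, comparing the oil-and-grease content (TOG) at the inlet and outlet (the sample values are illustrative, not measurements from the thesis):

```python
# Minimal sketch of the separation-efficiency calculation described above,
# comparing oil-and-grease content (TOG) at the inlet and outlet.
# The sample values are illustrative, not measurements from the thesis.

def separation_efficiency(tog_in_mg_l: float, tog_out_mg_l: float) -> float:
    """Percentage of oil and grease removed between inlet and outlet."""
    return 100.0 * (tog_in_mg_l - tog_out_mg_l) / tog_in_mg_l

print(round(separation_efficiency(1500.0, 60.0), 1))  # 96.0 (%)
```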