998 results for "Otimização multi-objetivo"


Relevance:

30.00%

Publisher:

Abstract:

A major and growing problem faced by modern society is the high production of waste and the related effects it produces, such as environmental degradation and pollution of various ecosystems, with direct effects on quality of life. Thermal treatment technologies have been widely used in the treatment of these wastes, and thermal plasma has been gaining importance in this type of processing. This work focuses on developing an optimized supervision and control system applied to a plant for processing petrochemical waste and effluents using thermal plasma. The system is basically composed of an inductive plasma torch, reactors, a gas washing/exhaust system, and the RF power supply used to generate the plasma. The supervision and control of the plant's process is of paramount importance in achieving the final goal. For this reason, several supporting capabilities were created in the search for greater process efficiency: event generation, plotting, distribution and storage of data for each subsystem of the plant, process execution, and 3D visualization of each subsystem of the plant, among others. A communication platform between the virtual 3D plant architecture and the real control structure (hardware) was created. The goal is to use mixed-reality concepts and to develop strategies for different types of controllers that allow manipulating the 3D plant without restrictions of place or schedule, optimizing the actual process. Studies have shown that one of the best ways to implement the control of inductively coupled plasma generation is to use intelligent control techniques, both for the efficiency of their results and for their low implementation cost, since they do not require a specific process model. A control strategy using fuzzy logic (Fuzzy-PI) was developed and implemented, and the results showed satisfactory performance in terms of response time and viability.
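As a rough illustration of the Fuzzy-PI strategy described above, the sketch below implements a minimal incremental fuzzy PI controller; the membership functions, rule base, and gains are illustrative assumptions, not the thesis implementation.

```python
# Minimal incremental Fuzzy-PI controller (illustrative sketch only).
# Rule base, membership functions, and gains are assumed values.

def tri(x, a, b, c):
    """Triangular membership function with feet at a, c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_pi_step(error, d_error, u_prev, ke=1.0, kde=1.0, ku=0.1):
    """One control step: fuzzify error and its change, fire a 3x3 rule base,
    defuzzify by weighted average, and integrate the resulting increment."""
    e = max(-1.0, min(1.0, ke * error))        # clamp to the universe [-1, 1]
    de = max(-1.0, min(1.0, kde * d_error))
    sets = {'N': (-2.0, -1.0, 0.0),            # Negative / Zero / Positive
            'Z': (-1.0, 0.0, 1.0),
            'P': (0.0, 1.0, 2.0)}
    mu_e = {k: tri(e, *v) for k, v in sets.items()}
    mu_de = {k: tri(de, *v) for k, v in sets.items()}
    # Consequent singletons for each (error, d_error) antecedent pair.
    rules = {('N', 'N'): -1.0, ('N', 'Z'): -0.7, ('N', 'P'): 0.0,
             ('Z', 'N'): -0.5, ('Z', 'Z'):  0.0, ('Z', 'P'): 0.5,
             ('P', 'N'):  0.0, ('P', 'Z'):  0.7, ('P', 'P'): 1.0}
    num = den = 0.0
    for (i, j), out in rules.items():
        w = min(mu_e[i], mu_de[j])             # firing strength (min t-norm)
        num += w * out
        den += w
    du = num / den if den > 0 else 0.0
    return u_prev + ku * du                    # PI action via integration
```

In this incremental form the rule base outputs a change in the control signal, so integrating the increment supplies the integral action of the PI controller.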

Relevance:

30.00%

Publisher:

Abstract:

One of several techniques applied to oil production processes is artificial lift, which uses equipment to reduce the bottom-hole pressure, providing a pressure differential that results in a flow increase. The choice of the artificial lift method depends on a detailed analysis of several factors, such as initial installation costs, maintenance, and the existing conditions in the producing field. The Electrical Submersible Pumping (ESP) method proves quite efficient when the objective is to produce high liquid flow rates in both onshore and offshore environments, under adverse temperature conditions and in the presence of viscous fluids. By definition, ESP is an artificial lift method in which a subsurface electric motor transforms electrical energy into mechanical energy to drive a multistage centrifugal pump, each stage composed of a rotating impeller (rotor) and a stationary diffuser (stator). The pump converts the mechanical energy of the motor into kinetic energy in the form of velocity, which pushes the fluid to the surface. The objective of this work is to apply the flexible polyhedron optimization method, known as the Modified Simplex Method (MSM), to study the influence of modifying the input and output parameters of the centrifugal pump impeller channel in an ESP system. By applying the optimization method to the angular parameters of the pump, the data resulting from the simulations yielded optimized values of head (lift height), lossless efficiency, and power.
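The flexible polyhedron method named in the abstract is the Nelder-Mead simplex search, which can be sketched with standard tooling; the `pump_head` surrogate below is a made-up stand-in for the simulations used in the work, and the angle values are invented.

```python
# Illustrative Nelder-Mead (flexible polyhedron) optimization of impeller angles.
# `pump_head` is an assumed surrogate; the thesis couples this to simulations.
from scipy.optimize import minimize

def pump_head(angles):
    beta_in, beta_out = angles          # impeller inlet/outlet blade angles (deg)
    # Hypothetical smooth surrogate with a maximum near (25, 35) degrees.
    return -(100 - 0.5 * (beta_in - 25.0) ** 2 - 0.8 * (beta_out - 35.0) ** 2)

result = minimize(pump_head, x0=[20.0, 30.0], method="Nelder-Mead",
                  options={"xatol": 1e-3, "fatol": 1e-3})
print("optimal angles:", result.x, "max head:", -result.fun)
```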

Relevance:

30.00%

Publisher:

Abstract:

This thesis develops a new technique for the design of composite microstructures via topology optimization, in order to maximize stiffness, making use of the strain energy method and an h-adaptive refinement scheme to obtain a better definition of the topological contours of the microstructure. This is done by optimally distributing material in a pre-established design region called the base cell. The finite element method is used to describe the field and to solve the governing equation. The mesh is refined iteratively, so that refinement is applied to all elements representing solid material and to all empty elements containing at least one node in a solid-material region. The finite element chosen for the model is the three-node linear triangle. To solve the constrained nonlinear programming problem, the augmented Lagrangian method was used, together with a minimization algorithm based on quasi-Newton search directions and Armijo-Wolfe line-search conditions to assist the descent process. The base cell that represents the composite is found from the equivalence between a fictitious material and a prescribed material, distributed optimally in the design region. The use of the strain energy method is justified by its lower computational cost, owing to a simpler formulation than the traditional homogenization method. Results are presented for changes in the prescribed displacement, in the volume constraint, and for various initial values of the relative densities.
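A toy sketch of the solver combination described (an augmented Lagrangian outer loop with a quasi-Newton inner minimizer); the quadratic objective and single equality constraint are assumed placeholders for the strain energy functional and the volume constraint.

```python
# Toy augmented Lagrangian loop with a quasi-Newton (BFGS) inner solver.
# Problem (assumed for illustration): min x^2 + y^2  s.t.  x + y - 1 = 0.
import numpy as np
from scipy.optimize import minimize

def f(x):  # objective (stand-in for the strain energy functional)
    return x[0] ** 2 + x[1] ** 2

def g(x):  # equality constraint (stand-in for the volume constraint)
    return x[0] + x[1] - 1.0

x, lam, rho = np.array([0.0, 0.0]), 0.0, 10.0
for _ in range(20):
    # Inner minimization of the augmented Lagrangian; BFGS generates
    # quasi-Newton directions with a Wolfe-type line search internally.
    L = lambda x: f(x) + lam * g(x) + 0.5 * rho * g(x) ** 2
    x = minimize(L, x, method="BFGS").x
    lam += rho * g(x)          # multiplier update
    if abs(g(x)) < 1e-8:
        break
print("solution:", x)          # expected: [0.5, 0.5]
```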

Relevance:

30.00%

Publisher:

Abstract:

Multi-cloud applications are composed of services offered by multiple cloud platforms, where the user/developer has full knowledge of the use of such platforms. The use of multiple cloud platforms avoids the following problems: (i) vendor lock-in, i.e., the application's dependency on a particular cloud platform, which is harmful in the case of degradation or failure of platform services, or even of price increases in service usage; (ii) degradation or failure of the application due to fluctuations in the quality of service (QoS) provided by some cloud platform, or even due to the failure of some service. In a multi-cloud scenario, it is possible to exchange a failing service, or one with QoS problems, for an equivalent service from another cloud platform. For an application to adopt the multi-cloud perspective, it is necessary to create mechanisms able to select which cloud services/platforms should be used in accordance with the requirements determined by the programmer/user. In this context, the major challenges in developing such applications include: (i) the choice of which underlying cloud services and platforms should be used, based on the user-defined requirements in terms of functionality and quality; (ii) the need to continually monitor the dynamic information (such as response time, availability, and price) related to cloud services, in addition to the wide variety of services; and (iii) the need to adapt the application if QoS violations affect the user-defined requirements. This PhD thesis proposes an approach for the dynamic adaptation of multi-cloud applications, to be applied when a service becomes unavailable or when the requirements set by the user/developer indicate that another available multi-cloud configuration meets them more efficiently. The work proposes a strategy composed of two phases. The first phase consists of modeling the application, exploring the capacity to represent commonalities and variability proposed in the context of the Software Product Lines (SPL) paradigm. In this phase, an extended feature model is used to specify the cloud service configuration to be used by the application (commonalities) and the different possible providers for each service (variability). Furthermore, the non-functional requirements associated with cloud services are specified by properties in this model, describing dynamic information about those services. The second phase consists of an autonomic process based on the MAPE-K control loop, which is responsible for optimally selecting a multi-cloud configuration that meets the established requirements and for performing the adaptation. The proposed adaptation strategy is independent of the programming technique used to perform the adaptation. In this work, the adaptation strategy is implemented with several programming techniques, such as aspect-oriented programming, context-oriented programming, and component- and service-based programming. Based on the proposed steps, we sought to assess the following: (i) whether the modeling process and the specification of non-functional requirements can ensure effective monitoring of user satisfaction; (ii) whether the optimal selection process presents significant gains compared to a sequential approach; and (iii) which techniques present the best trade-off between development/modularity effort and performance.
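A minimal sketch of the MAPE-K control loop on which the second phase is built, applied to toy multi-cloud service selection; all provider names, metrics, and thresholds are invented for illustration and do not come from the thesis.

```python
# Illustrative MAPE-K autonomic loop for multi-cloud service selection.
# Services, QoS thresholds, and selection logic here are assumptions.

knowledge = {
    "requirements": {"max_response_ms": 200},
    "current": {"storage": "cloudA"},
}

def monitor():
    # A real system would poll each provider's QoS metrics here.
    return {"storage": {"cloudA": 250, "cloudB": 95}}   # simulated readings

def analyze(metrics):
    """Detect a QoS violation against the user-defined requirement."""
    limit = knowledge["requirements"]["max_response_ms"]
    current = knowledge["current"]["storage"]
    return metrics["storage"][current] > limit

def plan(metrics):
    """Select the provider with the best measured response time."""
    return min(metrics["storage"], key=metrics["storage"].get)

def execute(provider):
    print("rebinding storage service to", provider)
    knowledge["current"]["storage"] = provider           # update knowledge

metrics = monitor()
if analyze(metrics):
    execute(plan(metrics))
```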

Relevance:

30.00%

Publisher:

Abstract:

Surface defects on steel parts impose costs on foundries due to the need for rework. Sand molds are frequently used in foundries and are largely responsible for surface defects. This study aims to optimize the levels of the molding process variables to minimize the occurrence of surface defects in steel castings made in silica sand molds chemically bonded by the cold-cure process. The methodology used a split-plot experimental design, with the following factors considered in the study: the resin percentage in the mold formulation, the addition of iron oxide, the type of paint, the paint application method, the number of paint layers, the use of hot air in the mold, and the waiting time of the mold before casting. Erosion, sand inclusion, penetration, porosity, and surface finish defects were analyzed as response variables. Tensile strength tests were performed to evaluate the influence of the factors on mechanical parameters, and X-ray diffraction, scanning electron microscopy (SEM), and thermal analyses (TG/DSC/dilatometry) were carried out to assess microstructural parameters. The results show that, for the erosion defect, the only significant factor at the 95% confidence level was the type of paint, with the alumina-based paint obtaining superior results. For the sand inclusion defect, there were three significant factors, with the best results obtained with the alumina-based paint applied by spray and the use of hot air in the mold before casting the metal. For the penetration defect, there were four significant factors, with the best results achieved with 0.8% resin and the addition of iron oxide in the mold formulation, the paint applied by brush, and a waiting time of 24 hours before pouring. For the porosity defect, no factors were significant at the 95% confidence level. For the surface finish defect, the best results were achieved with 0.8% resin in the mold formulation and application of the paint by brush. To obtain the factor levels that optimize all defects simultaneously, a weighted average of the results for each type of defect was computed, concluding that the best factor levels were: 0.8% resin and addition of iron oxide in the mold formulation, application of two coats of paint by brush or spray, use of hot air in the mold before casting, and a 24-hour waiting time after the mold was ready before pouring. These optimized factor levels were used in a confirmation experiment, which ratified the results, helping to reduce rework and, consequently, the costs of cast steel parts.
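A small sketch of the weighted-average aggregation used to pick factor levels that optimize all defects simultaneously; the defect scores and weights below are invented numbers, not the experimental results.

```python
# Illustrative weighted-average scoring of candidate factor-level setups
# across defect responses (lower score is better); all values are made up.
candidates = {
    "0.8% resin + Fe2O3, brush, 24 h": {"erosion": 0.2, "inclusion": 0.3,
                                        "penetration": 0.1, "porosity": 0.4,
                                        "finish": 0.2},
    "1.2% resin, spray, 2 h":          {"erosion": 0.5, "inclusion": 0.4,
                                        "penetration": 0.6, "porosity": 0.4,
                                        "finish": 0.5},
}
weights = {"erosion": 0.25, "inclusion": 0.25, "penetration": 0.2,
           "porosity": 0.1, "finish": 0.2}   # assumed relative importance

def weighted_score(scores):
    return sum(weights[k] * v for k, v in scores.items())

best = min(candidates, key=lambda c: weighted_score(candidates[c]))
print("best setup:", best)
```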

Relevance:

30.00%

Publisher:

Abstract:

Multi-objective problems may have many optimal solutions, which together form the Pareto optimal set. A class of heuristic algorithms for those problems, in this work called optimizers, produces approximations of this optimal set. The approximation set kept by the optimizer may be limited or unlimited. The benefit of using an unlimited archive is the guarantee that all nondominated solutions generated during the process will be saved. However, due to the large number of solutions that can be generated, keeping an archive and frequently comparing new solutions to the stored ones may demand a high computational cost. The alternative is to use a limited archive. The problem that emerges from this situation is the need to discard nondominated solutions when the archive is full. Some techniques have been proposed to handle this problem, but investigations show that none of them can reliably prevent the deterioration of the archive. This work investigates a technique to be used together with ideas previously proposed in the literature to deal with limited archives. The technique consists in keeping discarded solutions in a secondary archive and periodically recycling these solutions, bringing them back into the optimization. Three recycling methods are presented. In order to verify whether these ideas are capable of improving the archive contents during the optimization, they were implemented together with other techniques from the literature. A computational experiment with the NSGA-II, SPEA2, PAES, MOEA/D, and NSGA-III algorithms, applied to many classes of problems, is presented. The potential and the difficulties of the proposed techniques are evaluated based on statistical tests.
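A compact sketch of a limited Pareto archive with a secondary recycling archive, assuming two-objective minimization and a random discard policy; the actual discard and recycling methods investigated in the work are more elaborate.

```python
# Illustrative limited Pareto archive with a secondary "recycling" archive.
# The discard policy (random) and recycle trigger are simplifying assumptions.
import random

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

class RecyclingArchive:
    def __init__(self, capacity):
        self.capacity, self.main, self.secondary = capacity, [], []

    def add(self, sol):
        if any(dominates(m, sol) for m in self.main):
            return                                   # dominated: reject
        self.main = [m for m in self.main if not dominates(sol, m)]
        self.main.append(sol)
        if len(self.main) > self.capacity:
            victim = random.choice(self.main)        # discard policy
            self.main.remove(victim)
            self.secondary.append(victim)            # keep for later recycling

    def recycle(self):
        """Reinject previously discarded solutions into the optimization."""
        recycled, self.secondary = self.secondary, []
        for sol in recycled:
            self.add(sol)

archive = RecyclingArchive(capacity=50)
for _ in range(1000):
    archive.add((random.random(), random.random()))  # stand-in solutions
archive.recycle()                                    # periodic recycling step
```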

Relevance:

30.00%

Publisher:

Abstract:

The Quadratic Minimum Spanning Tree (QMST) problem is a generalization of the Minimum Spanning Tree problem in which, beyond the linear costs associated with each edge, quadratic costs associated with each pair of edges must be considered. The quadratic costs are due to interaction costs between the edges. When interactions occur between adjacent edges only, the problem is named the Adjacent Only Quadratic Minimum Spanning Tree (AQMST) problem. Both QMST and AQMST are NP-hard and model a number of real-world applications involving infrastructure network design. Linear and quadratic costs are summed in the mono-objective versions of the problems. However, real-world applications often deal with conflicting objectives. In those cases, considering linear and quadratic costs separately is more appropriate, and multi-objective optimization provides more realistic modelling. Exact and heuristic algorithms are investigated in this work for the Bi-objective Adjacent Only Quadratic Spanning Tree problem. The following techniques are proposed: backtracking, branch-and-bound, Pareto Local Search, Greedy Randomized Adaptive Search Procedure, Simulated Annealing, NSGA-II, Transgenetic Algorithm, Particle Swarm Optimization, and a hybridization of the Transgenetic Algorithm with MOEA/D. Pareto-compliant quality indicators are used to compare the algorithms on a set of benchmark instances proposed in the literature.
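To make the bi-objective cost structure concrete, a short sketch that evaluates a spanning tree under separate linear and adjacent-only quadratic objectives; the graph, cost tables, and example tree are invented for illustration.

```python
# Illustrative bi-objective evaluation for AQMST: objective 1 sums linear edge
# costs; objective 2 sums quadratic costs over pairs of adjacent tree edges.
from itertools import combinations

linear = {('a', 'b'): 3, ('b', 'c'): 2, ('b', 'd'): 4}       # edge -> cost
quad = {(('a', 'b'), ('b', 'c')): 5, (('a', 'b'), ('b', 'd')): 1,
        (('b', 'c'), ('b', 'd')): 2}                         # adjacent pairs

def adjacent(e1, e2):
    return bool(set(e1) & set(e2))                           # share a vertex?

def evaluate(tree):
    f1 = sum(linear[e] for e in tree)
    f2 = sum(quad.get((e1, e2), quad.get((e2, e1), 0))
             for e1, e2 in combinations(tree, 2) if adjacent(e1, e2))
    return f1, f2                                            # objective vector

print(evaluate([('a', 'b'), ('b', 'c'), ('b', 'd')]))        # -> (9, 8)
```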

Relevance:

30.00%

Publisher:

Abstract:

Dissertation, Master's degree in Accounting and Taxation, Instituto Politécnico de Santarém, Escola Superior de Gestão e Tecnologia, 2016

Relevance:

30.00%

Publisher:

Abstract:

This internship was carried out at Águas do Oeste between October 2013 and July 2014, with the objective of optimizing the energy use of the aeration and mixing systems. The importance of this topic stems from the fact that these systems are responsible for about 70% of the energy consumption of a wastewater treatment plant (WWTP). Equipment suitable for intervention was identified and, through a literature review and consultation with experts, possible solutions in terms of energy efficiency were compiled; a summary of the identified solutions is presented in this report. The study evaluated several case-study WWTPs that serve as a reference for many others under the Águas do Oeste concession, and some of the identified solutions were tested at these plants. From this analysis, it is concluded that both the mixing and the aeration performed at most WWTPs are inflexible to load variations and/or excessive, so there are optimization opportunities in reducing the number of devices per tank and in cutting aeration and mixing power. Cleaning the diffusers is also very important to restore their transfer efficiency to conditions very similar to the original ones, and should be done regularly, at an interval that will vary from plant to plant. With small changes, the dewatering stage at the Atouguia da Baleia WWTP can be carried out unattended, i.e., at night, making it possible to reduce the operating time of the sludge-tank mixer and aerator, which currently run continuously. It was also concluded that the optimization of aeration and mixing systems is an area with a long way to go, where much remains to be tested and applied, and which often also requires changing perceptions and behaviors.

Relevance:

30.00%

Publisher:

Abstract:

The human brain stores, integrates, and transmits information by means of millions of neurons, interconnected by countless synapses. Though neurons communicate through chemical signaling, information is coded and conducted in the form of electrical signals. Neuroelectrophysiology focuses on the study of this type of signaling. Both intra- and extracellular approaches are used in research, but none holds as much potential for high-throughput screening and drug discovery as extracellular recordings using multielectrode arrays (MEAs). MEAs measure neuronal activity, both in vitro and in vivo. Their key advantage is the capability to record electrical activity at multiple sites simultaneously. Alzheimer's disease (AD) is the most common neurodegenerative disease and one of the leading causes of death worldwide. It is characterized by neurofibrillary tangles and aggregates of amyloid-β (Aβ) peptides, which lead to the loss of synapses and ultimately to neuronal death. Currently there is no cure, and the available drugs can only delay its progression. In vitro MEA assays enable rapid screening of neuroprotective and neuroharming compounds. Therefore, MEA recordings are of great use in both basic and clinical AD research. The main aim of this thesis was to optimize the formation of SH-SY5Y neuronal networks on MEAs. These can be extremely useful for facilities that do not have access to primary neuronal cultures, but they can also save resources and yield faster high-throughput results for those that do. Adhesion-mediating compounds proved to impact cell morphology, viability, and the exhibition of spontaneous electrical activity. Moreover, SH-SY5Y cells were successfully differentiated and demonstrated acute effects on neuronal function after Aβ addition. This effect on electrical signaling was dependent on the Aβ oligomer concentration. The results presented here allow us to conclude that the SH-SY5Y cell line can be successfully differentiated on properly coated MEAs and used for assessing acute Aβ effects on neuronal signaling.

Relevance:

30.00%

Publisher:

Abstract:

In the current energy landscape, sustainable development measures play an increasingly significant role and, since buildings are responsible for 40% of the energy consumed in the EU, the challenge is to integrate energy-efficiency measures into new buildings from the design stage. As this sector is in continuous expansion, reducing consumption will largely depend on optimizing the thermal behavior of buildings and of the energy systems that equip them. This work studied the role of thermal inertia in reducing the energy needs for climate control of buildings, with the objective of identifying strategies to improve the thermal behavior and energy performance of buildings built with the LSF (light steel framing) construction technique, which are characterized by low thermal inertia when compared with otherwise similar buildings built with conventional technologies, without neglecting questions of economic viability. As a general result, the importance of where it is most beneficial to add thermal mass stands out (exterior walls, roof, interior walls), as well as the need to use a material with high energy density and low cost. The comparative analysis of the different building models, simulated with the DesignBuilder/EnergyPlus software, was carried out using a methodology in which each construction model is evaluated under four levels of thermal insulation and two internal thermal load conditions. The energy and economic analysis was performed over a reference period of 20 years. The cost of the construction solutions was mostly obtained through the Gerador de Preços computational tool from Cype, SA©, assuming a constant annual energy consumption equal to the annual climate-control needs, as well as constant capital discount and energy-price inflation rates. In general, it is concluded that LSF buildings improved through the judicious addition of thermal mass to certain construction elements present, in most of the cases studied, lower annual climate-control needs than conventional buildings with medium/strong thermal inertia. It is also concluded that the LSF construction method is more effective in energy and economic terms when compared with similar solutions built with a conventional method. The main conclusions of this work are identified in the following section.
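A minimal sketch of the 20-year present-value energy-cost comparison implied by the stated assumptions (constant annual consumption, constant discount and energy-inflation rates); all numeric values below are invented, not results from the work.

```python
# Illustrative 20-year present-value comparison of annual climate-control
# energy costs; consumption figures, price, and rates are assumed values.
def energy_npv(annual_kwh, price_per_kwh, years=20,
               discount_rate=0.05, energy_inflation=0.03):
    """Present value of a constant annual consumption with escalating prices."""
    return sum(annual_kwh * price_per_kwh * (1 + energy_inflation) ** t
               / (1 + discount_rate) ** t
               for t in range(1, years + 1))

lsf_improved = energy_npv(annual_kwh=2500, price_per_kwh=0.22)
conventional = energy_npv(annual_kwh=3100, price_per_kwh=0.22)
print(f"LSF improved: {lsf_improved:.0f} EUR, conventional: {conventional:.0f} EUR")
```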

Relevance:

30.00%

Publisher:

Abstract:

Lipases and biosurfactants are compounds produced by microorganisms through solid-state fermentation (SSF) or submerged fermentation (SmF), with applications in the food and pharmaceutical industries, in bioenergy, and in bioremediation, among others. The general objective of this work was to optimize lipase production through solid-state and submerged fermentation. Fungi were screened for their ability to produce lipases through SSF and SmF, and those showing the highest lipolytic activities were used in the selection of significant variables and in the optimization of lipase production in both cultivation modes. Sequential experimental design techniques were employed, including fractional and full factorial designs and response surface methodology, to optimize lipase production. The variables studied in SSF were pH, the type of bran used as carbon source, the nitrogen source, the inducer, the nitrogen source concentration, the inducer concentration, and the fungal strain. In SmF, in addition to the variables studied in SSF, the initial inoculum concentration and agitation were also studied. The enzymes produced were characterized with respect to optimal temperature and pH and to temperature and pH stability. Under the optimized lipase production conditions, the correlation between lipase and bioemulsifier production was evaluated. Initially, 28 fungi were isolated. The fungi Aspergillus O-4 and Aspergillus E-6 were selected as good lipase producers in the solid-state fermentation process, and the fungi Penicillium E-3, Trichoderma E-19, and Aspergillus O-8 as good lipase producers through submerged fermentation. The optimized conditions for lipase production through solid-state fermentation were obtained using the fungus Aspergillus O-4, soybean bran, 2% sodium nitrate, 2% olive oil, and pH below 5, reaching maximum lipolytic activities of 57 U. The optimized conditions for lipase production in submerged fermentation were obtained using the fungus Aspergillus O-8, wheat bran, 4.5% yeast extract, 2% soybean oil, and pH 7.15. The maximum activity obtained during the optimization stage was 6 U. The lipases obtained by SSF showed maximum activities at 35 °C and pH 6.0, while those obtained by SmF showed optima at 37 °C and pH 7.2. The thermal stability of the lipases produced via SmF was higher than that of the lipases obtained via SSF, with residual activities of 72% and 26.8% after 1 h of exposure to 90 °C and 60 °C, respectively. The lipases obtained via SSF were more stable at alkaline pH, with residual activities above 60% after 24 h of exposure, while the lipases produced via SmF were more stable at acidic pH, with 80% residual activity in the pH range between 3.5 and 6.5. In submerged fermentation, the correlation between lipase production and the oil-in-water (O/W) and water-in-oil (W/O) emulsifying activities of the extracts was 95.4% and 86.8%, respectively, with maximum O/W and W/O emulsifying activities of 2.95 EU and 42.7 EU. Although the highest lipase production was obtained in solid-state fermentation, there was no concomitant biosurfactant production. The submerged fermentation extracts reduced the surface tension from 50 mN m⁻¹ to 28 mN m⁻¹ and showed antimicrobial activity against the microorganism S. aureus ATCC 25923, with antimicrobial potentials of 36 to 43% in the first three days of fermentation. Submerged fermentation was the technique that presented the best results for the optimization of lipase production, as well as for the simultaneous production of biosurfactants.

Relevância:

30.00% 30.00%

Publicador:

Resumo:

Nowadays, different industries and sectors of economic activity base their development on the constant search for sources of improvement, so as to improve the quality/price ratio. Although technological improvements and innovations emerge every day in the industrial sector, they are not enough by themselves. A large part of the optimization achieved, both in manufacturing and in service industries, comes from the "simple" elimination of waste and the constant search for sources of improvement. With this goal set, Grohe Portugal Componentes Sanitários, Lda. proposed the elimination of waste in the supply of components to the assembly lines at its plant in Albergaria-a-Velha. This process involves not only optimizing the supply time and supply quantities, but also restructuring the different supply routines. This entire optimization process rests on the concept of the Mizusumashi. The Mizusumashi, or logistics train as it is often called, arises with the objective of separating the supply task from the assembly function. It originates from the adaptation of the Milk Run concept to internal logistics. It is worth noting that, for this "simple" concept to work with an efficiency that justifies its application, many factors need adjustment or even, in some cases, a complete restructuring. The work carried out at this plant, which culminated in this document, was based on the analysis, evaluation, and implementation of improvements in the system supplying the assembly lines. The entire supply process was analyzed and broken down into its components, so that an appropriate restructuring plan could be designed. Improvements to layout, times, and tasks were implemented, and the results were positive considering the initial objective. The whole plan was designed and documented with the aim of making the system adaptable to possible changes, making it possible to create a system oriented toward continuous improvement. With standardized, routine supply, stock management becomes more precise, thereby reducing the waste inherent to these functions.