17 results for Successive Overrelaxation method with 1 parameter
at Instituto Politécnico do Porto, Portugal
Abstract:
On the basis of the electrochemical behaviour of the herbicide bentazone (BTZ), a new flow-injection analysis (FIA) method with amperometric detection has been developed for its quantification in estuarine waters. Standard solutions and samples (200 µL) were injected into a water carrier stream, and both pH and ionic strength were automatically adjusted inside the manifold. Optimization of critical FIA conditions indicated that the best analytical results were obtained at an oxidation potential of 1.10 V, pH 4.5, and an overall flow rate of 2.4 mL min⁻¹. Analysis of real samples was performed by means of calibration curves over the concentration range 2.5×10⁻⁶ to 5.0×10⁻⁵ mol L⁻¹, and the results were compared with those obtained by an independent method (HPLC). The accuracy of the amperometric determinations was ascertained; errors relative to the comparison method were below 4%, and sampling rates were approximately 100 samples h⁻¹. The repeatability of the proposed method was calculated by assessing the relative standard deviation (%) of ten consecutive determinations of one sample; the value obtained was 2.1%.
Abstract:
A new flow-injection analytical procedure is proposed for the determination of the total amount of polyphenols in wines; the method is based on the formation of a colored complex between 4-aminoantipyrine and phenols in the presence of an oxidizing reagent. The oxidizing agents hexacyanoferrate(III), peroxodisulfate, and tetraoxoiodate(VII) were tested. Batch trials were first performed to select appropriate oxidizing agents, pH, and concentration ratios of reagents, on the basis of their effect on the stability of the colored complex. The conditions selected as a result of these trials were implemented in a flow-injection analytical system in which the influence of injection volume, flow rate, and reaction-coil length was evaluated. Under the optimum conditions the total amount of polyphenols, expressed as gallic acid, could be determined within a concentration range of 36 to 544 mg L⁻¹, with a sensitivity of 344 L mol⁻¹ cm⁻¹ and an RSD <1.1%. The reproducibility of analytical readings was indicative of standard deviations <2%. Interference from sugars, tartaric acid, ascorbic acid, methanol, ammonium sulfate, and potassium chloride was negligible. The proposed system was applied to the determination of total polyphenols in red wines, and enabled the analysis of approximately 55 samples h⁻¹. Results were generally precise and accurate; the RSD was <3.9% and relative errors with respect to the Folin–Ciocalteu method were <5.1%.
Abstract:
Constrained nonlinear optimization problems can be solved using penalty or barrier functions. This strategy, based on solving unconstrained problems derived from the original one, has proved effective, particularly when used with direct search methods. An alternative for solving such problems is the filter method. The filter method, introduced by Fletcher and Leyffer in 2002, has been widely used to solve problems of the type mentioned above. These methods use a strategy different from that of barrier or penalty functions: the latter define a new function that combines the objective function and the constraints, whereas the filter method treats the optimization problem as a bi-objective problem that minimizes the objective function and a function that aggregates the constraints. Motivated by the work of Audet and Dennis in 2004, which used the filter method with derivative-free algorithms, the authors have developed work in which other direct search methods were used, combining their potential with the filter method. More recently, a new variant of these methods was presented, in which some alternative ways of aggregating the constraints for the construction of the filters were proposed. This paper presents a variant of the filter method, more robust than the previous ones, implemented with a safeguard procedure in which the values of the objective function and of the constraints are interlinked and not treated completely independently.
Abstract:
Constrained nonlinear optimization problems are usually solved using penalty or barrier methods combined with unconstrained optimization methods. Another alternative for solving constrained nonlinear optimization problems is the filter method. The filter method, introduced by Fletcher and Leyffer in 2002, has been widely used in several areas of constrained nonlinear optimization. These methods treat the optimization problem as a bi-objective problem that attempts to minimize the objective function and a continuous function that aggregates the constraint violation functions. Audet and Dennis presented the first filter method for derivative-free nonlinear programming, based on pattern search methods. Motivated by this work, we have developed a new direct search method, based on simplex methods, for general constrained optimization, which combines the features of the simplex method and the filter method. This work presents a new variant of these methods that combines the filter method with other direct search methods, and proposes some alternatives for aggregating the constraint violation functions.
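As a minimal illustration of the two ingredients named above, the sketch below assumes inequality constraints g_i(x) <= 0 and uses the Euclidean norm of their positive parts as one possible aggregation (the work proposes alternatives; this is illustrative only):

    import numpy as np

    def violation(g_values):
        # Aggregate inequality constraints g_i(x) <= 0 into one scalar:
        # here, the Euclidean norm of the positive parts (one common choice).
        return float(np.linalg.norm(np.maximum(g_values, 0.0)))

    def accept(f_new, h_new, filt):
        # A trial pair (f, h) is rejected if some filter entry is no worse
        # in both objective value and constraint violation (dominance).
        if any(fe <= f_new and he <= h_new for fe, he in filt):
            return False
        # Otherwise accept it and discard the entries it now dominates.
        filt[:] = [(fe, he) for fe, he in filt
                   if not (f_new <= fe and h_new <= he)]
        filt.append((f_new, h_new))
        return True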
Abstract:
The purpose of this work is to present an algorithm for solving nonlinear constrained optimization problems, using the filter method with the inexact restoration (IR) approach. In the IR approach two independent phases are performed in each iteration: the feasibility phase and the optimality phase. The first directs the iterative process towards the feasible region, i.e. it finds a point with smaller constraint violation. The optimality phase starts from this point, and its goal is to optimize the objective function over the set of satisfied constraints. To evaluate the solution approximations in each iteration, a scheme based on the filter method is used in both phases of the algorithm. This method replaces merit functions based on penalty schemes, avoiding the related difficulties such as the estimation of the penalty parameter and the non-differentiability of some of these functions. The filter method is implemented in the context of the line search globalization technique. A set of more than two hundred AMPL test problems is solved. The algorithm developed is compared with the LOQO and NPSOL software packages.
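Schematically, one iteration of such a filter/inexact-restoration algorithm could be organized as below; restore and optimize_step are hypothetical stand-ins for the feasibility and optimality phases described in the abstract, not the paper's actual procedures:

    def dominated(f_val, h_val, filt):
        # (f, h) is dominated if some filter entry is no worse in both coordinates.
        return any(fe <= f_val and he <= h_val for fe, he in filt)

    def ir_filter_iteration(x, f, h, restore, optimize_step, filt):
        # Feasibility phase: move towards the feasible region,
        # i.e. find a point with smaller constraint violation.
        z = restore(x)
        if dominated(f(z), h(z), filt):
            return x, False                     # rejected by the filter; caller reacts
        # Optimality phase: improve the objective from z (e.g. via line search).
        x_new = optimize_step(z)
        if dominated(f(x_new), h(x_new), filt):
            return z, False
        filt.append((f(x_new), h(x_new)))       # accepted pair enters the filter
        return x_new, True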
Abstract:
This study falls within the scope of Geometry and aims to understand the influence of the teaching resources used on the recognition of geometric properties and relations in plane figures. In accordance with the aim of the study, we formulated two interrelated guiding questions: What weaknesses do students show in recognizing geometric properties in plane figures? What contributions does the use of manipulative materials make to spatial visualization and to the investigation of geometric properties? With this study we intend to gather information that helps deepen knowledge about students' geometric reasoning. Methodologically, the study follows a mixed research method, with the collection of qualitative information of an interpretative nature and of quantitative information, in the form of a case study. Data collection was carried out in a 4th-year primary school class in which the teaching experiment was developed. The information gathered resulted from direct observation, and the data sources were the students' work, field notes, and photographic, video, and audio records. The teacher took on the role of researcher and facilitator of the tasks proposed to the students, who played an active role in the construction of their own knowledge. The results obtained highlight the students' weaknesses in recognizing geometric properties of plane figures in different positions. They also underline the contributions of the use of the Mira and the Tangram to the study of symmetry and to the development of spatial visualization, leading to concrete, motivating, and meaningful learning.
Abstract:
This work proposes a new biomimetic sensor material for trimethoprim. It is prepared by radical polymerization, with trimethylolpropane trimethacrylate as cross-linker, benzoyl peroxide as radical initiator, chloroform as porogenic solvent, and methacrylic acid and 2-vinylpyridine as monomers. Sensor loadings ranging from 1 to 6% were studied. Their behavior was compared with that obtained with an ion-exchanger quaternary ammonium salt (with tetrakis(p-chlorophenyl)borate or tetraphenylborate as additive). The effect of an anionic additive in the sensing membrane was also tested. Trimethoprim sensors with 1% of imprinted particles from methacrylic acid monomers showed the best response in terms of slope (59.7 mV/decade) and detection limit (4.01×10⁻⁷ mol/L). These electrodes also displayed good selectivity towards nickel, manganese, aluminium, ammonium, lead, potassium, sodium, iron, chromium, sulfadiazine, alanine, cysteine, tryptophan, valine, and glycine. The sensors were not affected by pH changes from 2 to 6. They were successfully applied to the analysis of water from aquaculture.
Abstract:
In this work we employed a hybrid method, combining RF-magnetron sputtering with evaporation, for the deposition of tailor-made metallic precursors with a varying number of Zn/Sn/Cu (ZTC) periods, and compared two approaches to sulphurization. Two series of samples with 1×, 2×, and 4× ZTC periods were prepared. One series of precursors was sulphurized in a tubular furnace directly exposed to a sulphur vapour and N2 + 5% H2 flux at a pressure of 5.0×10⁴ Pa. A second series of identical precursors was sulphurized in the same furnace but inside a graphite box in which sulphur pellets were evaporated, again in the presence of N2 + 5% H2 and at the same pressure as in the sulphur flux experiments. The morphological and chemical analyses revealed a small-grain structure but good average composition for all three films sulphurized in the graphite box. As for the three films sulphurized in sulphur flux, grain growth was observed with the increase of the number of ZTC periods, whilst, in terms of composition, they were slightly Zn-poor. The films' crystal structure showed that Cu2ZnSnS4 is the dominant phase. However, in the case of the sulphur flux films, SnS2 was also detected. Photoluminescence spectroscopy studies showed an asymmetric broad-band emission which occurs in the range of 1–1.5 eV. Clearly, the radiative recombination efficiency is higher in the series of samples sulphurized in sulphur flux. We have found that sulphurization in sulphur flux leads to better film morphology than when the process is carried out in a graphite box under similar thermodynamic conditions. Solar cells were prepared and characterized, showing a correlation between improved film morphology and cell performance. The best cells achieved an efficiency of 2.4%.
Abstract:
Known algorithms capable of scheduling implicit-deadline sporadic tasks over identical processors at up to 100% utilisation invariably involve numerous preemptions and migrations. To the challenge of devising a scheduling scheme with as few preemptions and migrations as possible, for a given guaranteed utilisation bound, we respond with the algorithm NPS-F. It is configurable with a parameter, trading off guaranteed schedulable utilisation (up to 100%) vs preemptions. For any possible configuration, NPS-F introduces fewer preemptions than any other known algorithm matching its utilisation bound. A clustered variant of the algorithm, for systems made of multicore chips, eliminates (costly) off-chip task migrations, by dividing processors into disjoint clusters, formed by cores on the same chip (with the cluster size being a parameter). Clusters are independently scheduled (each, using non-clustered NPS-F). The utilisation bound is only moderately affected. We also formulate an important extension (applicable to both clustered and non-clustered NPS-F) which optimises the supply of processing time to executing tasks and makes it more granular. This reduces processing capacity requirements for schedulability without increasing preemptions.
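To give the flavour of the task-to-server mapping step, the sketch below shows a plain first-fit packing of task utilisations under a configurable cap; this is a simplified illustration only, and the actual NPS-F mapping, its time reserves, and its utilisation bound are considerably more refined:

    def first_fit_servers(utilisations, cap=0.75):
        # Pack task utilisations into notional servers, first-fit,
        # never loading a server beyond 'cap'; a cap below 1.0 mimics
        # trading guaranteed utilisation against fewer preemptions.
        servers, loads = [], []
        for u in utilisations:
            for i in range(len(loads)):
                if loads[i] + u <= cap:
                    servers[i].append(u)
                    loads[i] += u
                    break
            else:                        # no existing server fits: open a new one
                servers.append([u])
                loads.append(u)
        return servers

    # Example: six implicit-deadline tasks given by their utilisations
    print(first_fit_servers([0.4, 0.3, 0.5, 0.2, 0.6, 0.1]))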
Abstract:
The trajectory planning of redundant robots is an important area of research, and efficient optimization algorithms are needed. Pseudoinverse control is not repeatable, causing drift in joint space, which is undesirable for physical control. This paper presents a new technique that combines the closed-loop pseudoinverse method with genetic algorithms, leading to an optimization criterion for repeatable control of redundant manipulators and avoiding the joint angle drift problem. Computer simulations, based on redundant and hyper-redundant planar manipulators, show that when the end-effector traces a closed path in the workspace, the robot returns to its initial configuration. The solution is repeatable for a workspace with and without obstacles, in the sense that, after executing several cycles, the initial and final states of the manipulator are very close.
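The closed-loop pseudoinverse update at the core of such schemes can be sketched as follows; the jacobian and fk callables and the gain K are illustrative assumptions, and the genetic-algorithm layer that optimizes the repeatability criterion is omitted:

    import numpy as np

    def clik_step(q, jacobian, fk, x_d, dx_d, K, dt):
        # Closed-loop inverse kinematics: the pseudoinverse maps the desired
        # task-space velocity, corrected by the tracking error, to joint space.
        e = x_d - fk(q)                                  # task-space tracking error
        dq = np.linalg.pinv(jacobian(q)) @ (dx_d + K * e)
        return q + dq * dt                               # integrate joint velocities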
Abstract:
Buildings account for 40% of total energy consumption in the European Union. Reducing energy consumption in the buildings sector is therefore an important measure for lowering the Union's energy dependency and greenhouse gas emissions. Portuguese legislation incorporates these principles in order to regulate the energy performance of buildings. This energy performance should be accompanied by good conditions for the occupants of the buildings. According to EN 15251 (2007), the four factors that affect occupant comfort in buildings are indoor air quality (IAQ), thermal comfort, acoustics, and lighting. Ventilation directly affects all of these except lighting, so it is crucial to understand its performance. The ventilation efficiency concept therefore gains significance, because it is an attempt to quantify a parameter that can easily distinguish between the different options for air diffusion in a space. The two most internationally accepted indicators are the Air Change Efficiency (ACE) and the Contaminant Removal Effectiveness (CRE). Nowadays, with the development of Computational Fluid Dynamics (CFD), the behaviour of ventilation can be predicted more easily. Thirteen air diffusion strategies were measured in a test chamber using the tracer gas method, with the objective of validating the calculation of these two indicators by the MicroFlo module of the IES-VE software. The main conclusions of this work were: the values of the numerical simulations are in agreement with the experimental measurements; the value of the CRE depends more on the position of the contamination source than on the strategy used for air diffusion; the ACE indicator is more appropriate for quantifying the quality of air diffusion; and, to maximize ventilation efficiency, the solutions adopted should be schemes that operate with low supply air speeds and small differences between the supply air temperature and the room temperature.
Abstract:
The ventilation efficiency concept is an attempt to quantify a parameter that can easily distinguish between the different options for air diffusion in building spaces. Thirteen air diffusion strategies were measured in a test chamber using the tracer gas method, with the objective of validating the calculation by Computational Fluid Dynamics (CFD). The Air Change Efficiency (ACE) and the Contaminant Removal Effectiveness (CRE), the two most internationally accepted indicators, were compared. The main results of this work show that the values of the numerical simulations are in good agreement with the experimental measurements, and also that the solutions adopted for maximizing ventilation efficiency should be schemes that operate with low supply air speeds and small differences between the supply air temperature and the room temperature.
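For reference, these two indicators are commonly defined as sketched below (the usual tracer-gas formulations; the exact conventions adopted in the work may differ):

    def air_change_efficiency(tau_nominal, mean_age_of_air):
        # ACE: nominal time constant over twice the room-mean age of air,
        # in percent (100% ~ piston flow, 50% ~ complete mixing).
        return 100.0 * tau_nominal / (2.0 * mean_age_of_air)

    def contaminant_removal_effectiveness(c_exhaust, c_mean_room):
        # CRE: exhaust concentration over mean room concentration
        # (values above 1 indicate better-than-mixing contaminant removal).
        return c_exhaust / c_mean_room

    # Example: a low-speed supply with displacement-like behaviour
    print(air_change_efficiency(tau_nominal=600.0, mean_age_of_air=450.0))        # ~66.7
    print(contaminant_removal_effectiveness(c_exhaust=120.0, c_mean_room=90.0))   # ~1.33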
Abstract:
A new analytical methodology, based on liquid chromatography with fluorescence detection (LC-FD), after extraction, enzymatic hydrolysis, and solid-phase extraction (SPE) through Oasis HLB cartridges, was developed and validated for the simultaneous determination of three monohydroxy derivatives of polycyclic aromatic hydrocarbons (PAHs). The optimized analytical method is sensitive, accurate, and precise, with recoveries between 62 and 110% and limits of detection of 227, 9, and 45 ng/g for 1-hydroxynaphthalene, 2-hydroxyfluorene, and 1-hydroxypyrene, respectively. Their levels were estimated in different cephalopod matrices (edible tissues and hemolymph). The methodology was applied to samples of the major cephalopod species consumed worldwide. Of the 18 samples analyzed, 39% were contaminated with 1-hydroxynaphthalene, which was the only PAH metabolite detected. Its concentration ranged from 786 to 1145 ng/g. This highly sensitive and specific method allows the identification and quantitation of PAH metabolites in forthcoming food safety and environmental monitoring programs.
Abstract:
Distribution network planning aims to ensure that networks have the capacity to supply electrical energy with good levels of quality of service, taking into account the associated economic factors. Within the scope of the work presented in this dissertation, a planning model was developed that determines the network configuration resulting from the minimization of the costs associated with: 1) Joule-effect losses; 2) investment in new components; 3) energy not supplied. The uncertainty associated with the consumption of each load is modelled through fuzzy logic. The optimization problem defined is solved by the Benders decomposition method, which includes two optimal power flows (a DC model and an AC model) in the master and slave problems, respectively, for the validation of constraints. Stopping criteria for the Benders decomposition method were also defined. The proposed model is classified as mixed-integer nonlinear programming and was implemented in the General Algebraic Modeling System (GAMS) optimization tool. The model developed takes all network components into account in the planning optimization, as can be seen in the implemented case studies. Each case study is defined by varying the importance given to each of the problem's variables, so as to cover, to some extent, all expected operating scenarios. These case studies show the various configurations the network can take, according to the importance assigned to each of the variables, as well as the respective costs associated with each solution. This work offers a considerable contribution to distribution network planning, since it encompasses different variables in its execution. It is also a very robust model, which does not lose its bearings when finding solutions for large networks with a greater number of components.
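The Benders loop described above can be sketched generically as follows; solve_master and solve_slave are hypothetical stand-ins for the dissertation's DC-model master and AC-model slave problems, not the GAMS implementation:

    def benders(solve_master, solve_slave, tol=1e-4, max_iter=50):
        # Generic Benders decomposition loop (sketch).
        # solve_master(cuts) -> (plan, invest_cost, lower_bound)
        # solve_slave(plan)  -> (oper_cost, cut built from the slave duals)
        cuts, upper, plan = [], float("inf"), None
        for _ in range(max_iter):
            plan, invest_cost, lower = solve_master(cuts)
            oper_cost, cut = solve_slave(plan)            # validates constraints
            upper = min(upper, invest_cost + oper_cost)   # best full-cost solution so far
            if upper - lower <= tol:                      # stopping criterion: bound gap
                break
            cuts.append(cut)                              # tighten the master problem
        return plan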
Abstract:
With the widespread everyday use of technology, localization systems have been growing in popularity, owing to the wide range of functionalities they provide and the applications for which they are intended. However, most positioning systems do not work properly in indoor environments, hindering the development of localization applications in these settings. Accelerometers are widely used in inertial localization systems because of the information they provide about the accelerations experienced by a body. Accordingly, in this work, through the analysis of the acceleration signal coming from an accelerometer, a step-detection technique is proposed so that, in future applications, it can serve as a resource for computing the user's position inside a building. In this sense, this work aims to contribute to the analysis and identification of the acceleration signal obtained at a foot, so as to determine the duration of a step and the number of steps taken. To achieve this goal, a set of 12 acceleration recordings (for normal walking, fast walking, and running), collected by a mobile system (and originating from an accelerometer), were analysed using Matlab. From this exploratory study it became possible to present a step-counting algorithm based on peak detection and on the use of median and low-pass Butterworth filters, which yielded good results. To validate the information obtained in this phase, a set of experimental tests was then carried out on 33 newly collected recordings of walking and running. The number of steps taken, the mean step and stride times, and the error percentage were identified as the variables under study. An error percentage of 1% was obtained for the full set of recordings of 20, 100, 500, and 1000 steps with the application of the proposed step-counting method. Notwithstanding the difficulties observed in analysing the acceleration signals for running, the proposed algorithm performed well, achieving values close to those expected. The results obtained allow us to state that the objective of the study was successfully achieved. Nevertheless, future research is suggested in order to extend these results in other directions.
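A minimal sketch of such a step-counting pipeline (median filter, low-pass Butterworth filter, then peak detection) is shown below; the cutoff, filter order, and thresholds are illustrative assumptions, not the tuned values of the study:

    import numpy as np
    from scipy.signal import butter, filtfilt, find_peaks, medfilt

    def count_steps(acc, fs, cutoff=3.0, min_step_s=0.3):
        # acc: acceleration-magnitude signal sampled at fs Hz.
        acc = medfilt(acc, kernel_size=5)                  # remove impulsive spikes
        b, a = butter(4, cutoff / (fs / 2), btype="low")   # 4th-order low-pass
        smooth = filtfilt(b, a, acc)                       # zero-phase filtering
        peaks, _ = find_peaks(smooth, height=np.mean(smooth),
                              distance=int(min_step_s * fs))  # one peak per step
        return len(peaks), np.diff(peaks) / fs             # count and step durations

    # Example with a synthetic 2 Hz "walking" signal sampled at 100 Hz
    fs = 100
    t = np.arange(0, 10, 1 / fs)
    acc = 9.8 + np.sin(2 * np.pi * 2.0 * t) + 0.2 * np.random.randn(t.size)
    n_steps, durations = count_steps(acc, fs)
    print(n_steps, durations.mean())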