45 results for Inverse methods
at Instituto Politécnico do Porto, Portugal
Abstract:
This paper presents five different clustering methods for identifying typical load profiles of medium voltage (MV) electricity consumers. These methods are intended to be used in a smart grid environment to extract useful knowledge about customers' behaviour. The obtained knowledge can be used to support a decision tool, not only for utilities but also for consumers. Load profiles can be used by utilities to identify the aspects that cause system load peaks and to enable the development of specific contracts with their customers. The framework presented in the paper consists of several steps, namely a data pre-processing phase, the application of clustering algorithms, and the evaluation of the quality of the partition, supported by cluster validity indices. The process ends with the analysis of the discovered knowledge. To validate the proposed framework, a case study with a real database of 208 MV consumers is used.
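A minimal sketch of what one such clustering step could look like, assuming daily load profiles stored as a NumPy array; the choice of k-means and the silhouette index, and all variable names, are illustrative assumptions rather than the paper's exact algorithms:

```python
# Illustrative sketch: cluster daily MV load profiles and score the partition.
# Assumes `profiles` is an (n_consumers, 96) array of 15-minute load readings.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def cluster_load_profiles(profiles: np.ndarray, k_range=range(2, 11)):
    # Pre-processing: normalise each consumer's profile to its own peak,
    # so clusters reflect the shape of consumption rather than its magnitude.
    norm = profiles / profiles.max(axis=1, keepdims=True)

    best = None
    for k in k_range:
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(norm)
        score = silhouette_score(norm, labels)  # one possible validity index
        if best is None or score > best[0]:
            best = (score, k, labels)
    return best  # (validity score, number of clusters, cluster assignment)
```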
Abstract:
Intensive use of Distributed Generation (DG) represents a change in the paradigm of power systems operation, making small-scale energy generation and storage decisions relevant for the whole system. This paradigm led to the concept of the smart grid, for which efficient management, in both technical and economic terms, must be assured. This paper presents a new approach to solve the economic dispatch problem in smart grids. The proposed resource management methodology involves two stages. The first uses fuzzy set theory to define range forecasts for the natural resources as well as for the load. The second stage uses heuristic optimization to determine the economic dispatch, considering the generation forecast, storage management and demand response.
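A rough sketch of the two-stage idea under strong simplifying assumptions: triangular fuzzy numbers stand in for the range forecasts, and a plain merit-order dispatch stands in for the paper's heuristic optimizer; all names and figures are illustrative.

```python
# Illustrative two-stage sketch: fuzzy range forecast, then a simple dispatch.
from dataclasses import dataclass

@dataclass
class TriangularFuzzy:
    low: float    # pessimistic value
    peak: float   # most plausible value
    high: float   # optimistic value

    def interval(self, alpha: float):
        # alpha-cut: the crisp interval containing values with membership >= alpha
        return (self.low + alpha * (self.peak - self.low),
                self.high - alpha * (self.high - self.peak))

def merit_order_dispatch(load: float, units):
    """Dispatch the cheapest units first until the load is met.
    `units` is a list of (marginal_cost, capacity_MW) tuples; returns the MW
    assigned to each unit, in merit (cost) order."""
    schedule, remaining = [], load
    for cost, cap in sorted(units):
        p = min(cap, max(remaining, 0.0))
        schedule.append(p)
        remaining -= p
    return schedule

# Stage 1: fuzzy forecasts; Stage 2: dispatch a conservative alpha-cut scenario.
wind = TriangularFuzzy(low=5.0, peak=12.0, high=18.0)   # MW, illustrative
load = TriangularFuzzy(low=40.0, peak=50.0, high=60.0)  # MW, illustrative
net_load = load.interval(0.5)[1] - wind.interval(0.5)[0]
print(merit_order_dispatch(net_load, [(30.0, 20.0), (45.0, 25.0), (70.0, 30.0)]))
```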
Abstract:
In the context of electricity markets, transmission pricing is an important tool for achieving an efficient operation of the electricity system. The electricity market is influenced by several factors; however, transmission network management is one of the most important, because the network is a natural monopoly. Transmission tariffs can help to regulate the market; for this reason they must follow strict criteria. This paper presents the following methods for pricing the use of transmission networks by electricity market players: the Postage-Stamp method; the MW-Mile method; Distribution Factors methods; the Tracing methodology; Bialek's Tracing method; and Locational Marginal Prices. A nine-bus transmission network is used to illustrate the application of the tariff methods.
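A compact sketch of how the two simplest of these schemes can be computed, on a made-up network description; the paper's own formulations and nine-bus data are not reproduced here:

```python
# Illustrative sketch of two transmission pricing schemes.

def postage_stamp(total_network_cost, agent_demand, total_demand):
    """Postage-stamp: each agent pays in proportion to its demand,
    regardless of where in the network it is located."""
    return total_network_cost * agent_demand / total_demand

def mw_mile(line_costs, agent_flows):
    """MW-Mile style allocation: each agent pays for every line in proportion
    to the flow it causes on that line (e.g. from distribution factors).
    line_costs: {line: cost}, agent_flows: {line: {agent: MW flow}}."""
    charges = {}
    for line, cost in line_costs.items():
        flows = agent_flows[line]
        total = sum(abs(f) for f in flows.values()) or 1.0
        for agent, f in flows.items():
            charges[agent] = charges.get(agent, 0.0) + cost * abs(f) / total
    return charges

print(postage_stamp(1_000_000.0, agent_demand=50.0, total_demand=500.0))
print(mw_mile({"L1": 300.0}, {"L1": {"A": 40.0, "B": 10.0}}))
```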
Abstract:
This paper proposes two meta-heuristics (Genetic Algorithm and Evolutionary Particle Swarm Optimization) for solving a 15-bid case of bid-based Ancillary Services Dispatch in an Electricity Market. A Linear Programming approach is also included for comparison purposes. A test case based on the dispatch of Regulation Down, Regulation Up, Spinning Reserve and Non-Spinning Reserve services is used to demonstrate that meta-heuristics are suitable for solving this kind of optimization problem. Faster execution times and lower computational resource requirements are the most relevant advantages of the meta-heuristics when compared with the Linear Programming approach.
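For reference, a minimal Linear Programming formulation of this kind of reserve dispatch could look as follows; the bid data and service requirements are made up for illustration and are not the paper's 15-bid test case:

```python
# Illustrative LP for ancillary services dispatch: minimise total bid cost
# subject to meeting each service requirement and respecting bid capacities.
import numpy as np
from scipy.optimize import linprog

prices = np.array([12.0, 15.0, 20.0, 9.0])     # $/MW, one entry per bid
caps   = np.array([30.0, 25.0, 40.0, 20.0])    # MW offered by each bid
# Which service each bid belongs to (0 = Regulation Up, 1 = Spinning Reserve)
service = np.array([0, 0, 1, 1])
requirement = np.array([40.0, 50.0])           # MW needed per service

# Equality constraints: accepted MW per service must equal its requirement.
A_eq = np.zeros((2, 4))
for j, s in enumerate(service):
    A_eq[s, j] = 1.0

res = linprog(c=prices, A_eq=A_eq, b_eq=requirement,
              bounds=[(0.0, c) for c in caps], method="highs")
print(res.x)  # accepted MW per bid
```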
Abstract:
Electricity market players operating in a liberalized environment require access to an adequate decision support tool, allowing them to consider all the business opportunities and to make strategic decisions. Ancillary services represent a good negotiation opportunity that must be considered by market players. For this, the decision support tool must include ancillary services market simulation. This paper proposes two different methods (a Linear Programming approach and a Genetic Algorithm approach) for ancillary services dispatch. The methodologies are implemented in MASCEM, a multi-agent based electricity market simulator. A test case based on California Independent System Operator (CAISO) data, concerning the dispatch of Regulation Down, Regulation Up, Spinning Reserve and Non-Spinning Reserve services, is included in this paper.
Abstract:
Objectives: The purpose of this article is to find out the differences between surveys using paper and online questionnaires. The author has extensive experience with questions concerning opinions in the development of survey-based research, e.g. the limits of postal and online questionnaires. Methods: Paper and online questionnaires were used in the physician studies carried out in 1995 (doctors who graduated in 1982-1991), 2000 (graduated 1982-1996), 2005 (graduated 1982-2001) and 2011 (graduated 1977-2006), and in a study of 457 family doctors in 2000. The response rates were 64%, 68%, 64%, 49% and 73%, respectively. Results: The results of the physician studies showed that there were differences between the methods. These differences were connected with the use of paper-based versus online questionnaires and with the response rate. The online survey gave a lower response rate than the postal survey. The major advantages of the online survey were the short response time, the very low need for financial resources, and the fact that the data were loaded directly into the data analysis software, saving the time and resources associated with the data entry process. Conclusions: The current article helps researchers in planning the study design and choosing the right data collection method.
Abstract:
In real optimization problems, the analytical expression of the objective function is usually not known, nor are its derivatives, or they are complex. In these cases it becomes essential to use optimization methods in which the calculation of derivatives, or the verification of their existence, is not necessary: Direct Search Methods or Derivative-free Methods are one solution. When the problem has constraints, penalty functions are often used. Unfortunately, the choice of the penalty parameters is frequently very difficult, because most strategies for choosing them are heuristic. Filter methods appeared as an alternative to penalty functions. A filter algorithm introduces a function that aggregates the constraint violations and constructs a biobjective problem. In this problem a step is accepted if it reduces either the objective function or the constraint violation. This makes filter methods less parameter-dependent than penalty functions. In this work, we present a new direct search method for general constrained optimization, based on simplex methods, that combines the features of the simplex method and filter methods. This method does not compute or approximate any derivatives, penalty constants or Lagrange multipliers. The basic idea of the simplex filter algorithm is to construct an initial simplex and use it to drive the search. We illustrate the behaviour of our algorithm through some examples. The proposed methods were implemented in Java.
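As a sketch of the filter acceptance idea described above (written in Python rather than the paper's Java, and as a generic illustration rather than the authors' algorithm):

```python
# Illustrative sketch of the filter acceptance rule used by filter methods:
# a trial point is accepted if no stored pair (f, h) dominates it, i.e. if it
# improves either the objective f or the aggregated constraint violation h.

def constraint_violation(g_values):
    # Aggregate violation of inequality constraints g_i(x) <= 0.
    return sum(max(0.0, g) for g in g_values)

class Filter:
    def __init__(self):
        self.pairs = []  # list of accepted (f, h) pairs

    def acceptable(self, f, h):
        return all(f < f_k or h < h_k for f_k, h_k in self.pairs)

    def add(self, f, h):
        if self.acceptable(f, h):
            # Drop pairs dominated by the new entry, then store it.
            self.pairs = [(f_k, h_k) for f_k, h_k in self.pairs
                          if f_k < f or h_k < h]
            self.pairs.append((f, h))
            return True
        return False

filt = Filter()
print(filt.add(10.0, 2.0), filt.add(12.0, 0.5), filt.add(11.0, 1.0))
```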
Abstract:
In this work, a microwave-assisted extraction (MAE) methodology was compared with several conventional extraction methods (Soxhlet, Bligh & Dyer, modified Bligh & Dyer, Folch, modified Folch, Hara & Radin, Roese-Gottlieb) for the quantification of the total lipid content of three fish species: horse mackerel (Trachurus trachurus), chub mackerel (Scomber japonicus), and sardine (Sardina pilchardus). The influence of species, extraction method and frozen storage time (varying from fresh to 9 months of freezing) on total lipid content was analysed in detail. The efficiencies of the MAE, Bligh & Dyer, Folch, modified Folch and Hara & Radin methods were the highest and, although they were not statistically different, differences existed in terms of variability, with MAE showing the highest repeatability (CV = 0.034). The Roese-Gottlieb, Soxhlet, and modified Bligh & Dyer methods were very poor in terms of both efficiency and repeatability (CV between 0.13 and 0.18).
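The repeatability figures above are coefficients of variation; a trivial reminder of how such a value is obtained from replicate extractions (the numbers below are hypothetical, not the study's data):

```python
# Coefficient of variation (CV = standard deviation / mean) of replicate
# total lipid determinations; illustrative values only.
import statistics

replicates = [6.10, 6.05, 6.20, 6.12]  # g lipid / 100 g sample, hypothetical
cv = statistics.stdev(replicates) / statistics.mean(replicates)
print(f"CV = {cv:.3f}")
```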
Abstract:
GOAL: The manufacturing and distribution of instant thin-layer chromatography strips with silica gel (ITLC-SG) (reference method) is currently discontinued, so there is a need for an alternative method for the determination of the radiochemical purity (RCP) of 99mTc-tetrofosmin. This study aims to compare five alternative methods proposed by the producer to determine the RCP of 99mTc-tetrofosmin. METHODS: Nineteen vials of tetrofosmin were radiolabelled with 99mTc and the RCP percentages were determined. Five different methods were compared with the standard RCP testing method (ITLC-SG, 2x20 cm): Whatman 3MM (1x10 cm) with acetone and dichloromethane (method 1); Whatman 3MM (1x10 cm) with ethyl acetate (method 2); aluminum oxide-coated plastic thin-layer chromatography (TLC) plate (1x10 cm) with ethanol (method 3); Whatman 3MM (2x20 cm) with acetone and dichloromethane (method 4); and a solid-phase extraction C18 cartridge (method 5). RESULTS: The average RCP values were 95.30% ± 1.28% (method 1), 93.95% ± 0.61% (method 2), 96.85% ± 0.93% (method 3), 92.94% ± 0.99% (method 4) and 96.25% ± 2.57% (method 5) (n=12 each), and 93.15% ± 1.13% for the standard method (n=19). There were statistically significant differences in the values obtained for methods 1 (P=0.001), 3 (P=0.000) and 5 (P=0.004), and no statistically significant differences in the values obtained for methods 2 (P=0.113) and 4 (P=0.327). CONCLUSION: From the results obtained, methods 2 and 4 showed the best agreement with the standard method. Unlike method 4, method 2 is less time-consuming than the reference method and can overcome the problems associated with solvent toxicity. The remaining methods (1, 3 and 5) tended to overestimate the RCP value compared with the standard method.
Abstract:
Introduction: Paper and thin-layer chromatography methods are frequently used in classical Nuclear Medicine for the determination of the radiochemical purity (RCP) of radiopharmaceutical preparations. An aliquot of the radiopharmaceutical to be tested is spotted at the origin of a chromatographic strip (stationary phase), which in turn is placed in a chromatographic chamber in order to separate and quantify the radiochemical species present in the radiopharmaceutical preparation. There are several methods for the RCP measurement, based on the use of equipment such as dose calibrators, well scintillation counters, radiochromatographic scanners and gamma cameras. The purpose of this study was to compare these quantification methods for the determination of RCP. Material and Methods: 99mTc-Tetrofosmin and 99mTc-HDP were the radiopharmaceuticals chosen to serve as the basis for this study. For the determination of the RCP of 99mTc-Tetrofosmin we used ITLC-SG (2.5 x 10 cm) and 2-butanone (99mTc-tetrofosmin Rf = 0.55, 99mTcO4- Rf = 1.0, other labeled impurities 99mTc-RH Rf = 0.0). For the determination of the RCP of 99mTc-HDP, Whatman 31ET and acetone were used (99mTc-HDP Rf = 0.0, 99mTcO4- Rf = 1.0, other labeled impurities Rf = 0.0). After the development of the solvent front, the strips were allowed to dry and were then imaged on the gamma camera (256x256 matrix; zoom 2; LEHR parallel-hole collimator; 5-minute image) and on the radiochromatogram scanner. The strips were then cut at Rf 0.8 in the case of 99mTc-tetrofosmin and at Rf 0.5 in the case of 99mTc-HDP. The resulting pieces were crushed into an assay tube (to minimize the effect of counting geometry) and counted in the dose calibrator and in the well scintillation counter (for 1 minute). The RCP was calculated using the formula: % 99mTc-Complex = [(99mTc-Complex) / (Total amount of 99mTc-labeled species)] x 100. Statistical analysis was done using the test of hypotheses for the difference between means in independent samples. Results: The gamma camera based method demonstrated higher operator dependency (especially concerning the drawing of the ROIs), and the measurements obtained using the dose calibrator are very sensitive to the amount of activity spotted on the chromatographic strip, so the use of a minimum activity of 3.7 MBq is essential to minimize quantification errors. The radiochromatographic scanner and the well scintillation counter showed concordant results and demonstrated the highest level of precision. Conclusions: Methods based on radiochromatographic scanners and well scintillation counters proved to be the most accurate and least operator-dependent.
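A direct transcription of the RCP formula quoted above, applied to counts measured on the two strip segments; the count values are hypothetical:

```python
# Radiochemical purity from the counts of the two strip pieces.
# counts_complex: counts of the segment containing the 99mTc-complex,
# counts_impurities: counts of the remaining labeled species.
def radiochemical_purity(counts_complex: float, counts_impurities: float) -> float:
    total = counts_complex + counts_impurities
    return 100.0 * counts_complex / total

# Hypothetical well-counter readings (counts per minute) for the two pieces.
print(f"RCP = {radiochemical_purity(182_000, 9_500):.1f}%")
```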
Abstract:
Introduction: Although relative uptake values are not the main objective of a 99mTc-DMSA scan, they provide important quantitative information. In most dynamic renal scintigraphies, attenuation correction is essential to obtain a reliable result from the quantification process. In DMSA scans, however, the absence of significant background and the lower attenuation in pediatric patients mean that attenuation correction techniques are usually not applied. The geometric mean is the most common method, but it requires the acquisition of an additional anterior projection, which is not acquired by a large number of NM departments. This method and the attenuation factors proposed by Tonnesen are correlated in this study with the absence of attenuation correction procedures. Material and Methods: Images from 20 individuals (aged 3 years +/- 2) were used and the two attenuation correction methods were applied. The mean acquisition time (time post DMSA administration) was 3.5 hours +/- 0.8 h. Results: The absence of attenuation correction showed a good correlation with both attenuation correction methods (r=0.73 +/- 0.11), and the mean difference in the uptake values between the different methods was 4 +/- 3. The correlation was higher when the age was lower. The correlation between the two attenuation correction methods was higher than their correlation with the "no attenuation correction" approach (r=0.82 +/- 0.8), and the mean difference in the uptake values was 2 +/- 2. Conclusion: The decision not to apply any attenuation correction method can be justified by the minor differences observed in the relative kidney uptake values. Nevertheless, if an accurate value of the relative kidney uptake is required, then an attenuation correction method should be used. The attenuation correction factors proposed by Tonnesen can be easily implemented and thus become a practical alternative, namely when the anterior projection needed for the geometric mean methodology is not acquired.
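A minimal sketch of the geometric-mean relative uptake calculation referred to above; the ROI counts are hypothetical, and the Tonnesen depth-based factors are left as user-supplied corrections rather than reproduced here:

```python
# Relative kidney uptake with geometric-mean attenuation correction.
# ant_*/post_*: background-subtracted ROI counts from the anterior and
# posterior projections; correction factors default to 1.0 (none applied).
from math import sqrt

def relative_uptake(ant_left, post_left, ant_right, post_right,
                    factor_left=1.0, factor_right=1.0):
    gm_left = sqrt(ant_left * post_left) * factor_left
    gm_right = sqrt(ant_right * post_right) * factor_right
    total = gm_left + gm_right
    return 100.0 * gm_left / total, 100.0 * gm_right / total

# Hypothetical counts: left kidney with slightly higher uptake than the right.
print(relative_uptake(52_000, 48_000, 47_000, 44_000))
```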
Abstract:
Introduction: The sit-to-stand (STS) movement sequence demands a high level of postural control (PC). In individuals with Parkinson's disease (PD), the circuits involved in anticipatory postural adjustments (APAs) appear to be affected, which is reflected in reduced postural control with repercussions on this movement sequence. Objective: To assess the behaviour of APAs at the ankle joint during the STS movement sequence in individuals with PD. Methods: Four cases with PD, with disease duration between 3 and 17 years, were studied; they underwent a physiotherapy intervention based on the principles of the Bobath Concept for 12 weeks. Before (M0) and after (M1) the intervention, electromyographic recordings of the tibialis anterior (TA) and soleus (SOL) muscles were obtained bilaterally during the STS sequence. In addition, the Berg Balance Scale, the Modified Falls Efficacy Scale (MFES) and the International Classification of Functioning (ICF) were used to indirectly assess the functional impact of the reorganisation of the APAs. Results: At M0 the results suggest a reduction of APAs, since we observed: 1) different activation times of TA and SOL between limbs and 2) activation of SOL prior to TA in participants A, C and D. At M1, an approximation to the expected pattern of APAs was observed for most individuals. The results on the Berg scale and the MFES, from M0 to M1, suggest an increase in balance and in confidence for most participants (A, 21/42 points; B kept the final score of 31 points; C, 50/54 points; and D, 45/53 points on the Berg scale; A, 30/43 points; B, 21/18 points; C, 70/68 points; and D, 40/64 points on the MFES). Improvements were also observed in the ICF activities and participation domains. Conclusion: In the individuals studied there was, in general, a change towards the expected timing of the APAs at M1. In subjects A, C and D, a change in SOL activation time relative to TA activity was observed at M1. In subject B, on the left side, the same behaviour was not observed; the inverse activation of SOL relative to TA persisted.
Abstract:
This dissertation presents a solution to the problem of three-dimensional modelling of underground galleries. The work employs techniques from the field of mobile robotics to obtain an autonomous mobile modelling system capable of operating in unstructured environments without access to global positioning systems, namely GPS. A mobile, autonomous modelling system can be very advantageous, since it constitutes a quick and simple method for monitoring the structures and for creating virtual representations of the galleries with a high level of detail. The modelling system moves inside the tunnels to collect sensory information about the geometry of the structure. Organising these data to build a coherent model requires exact knowledge of the path followed by the system, so the localisation problem of the sensor platform has to be solved. The formulation of an autonomous localisation system has to overcome obstacles that are particularly pronounced in underground environments, such as structural monotony and the aforementioned absence of global positioning systems. In this context, the concept of SLAM (Simultaneous Localization and Mapping) was adopted to determine the localisation of the sensor platform in six degrees of freedom. Following the traditional approach, the core of the SLAM algorithm is the Extended Kalman Filter (EKF). The proposed system incorporates advanced state-of-the-art methods, namely Inverse Depth Parametrization and the 1-Point RANSAC outlier rejection method. The most important contribution of the proposed method to the state of the art is the fusion of visual information with inertial information. The localisation algorithm was tested on real data acquired inside a road tunnel. The results obtained show that, by fusing inertial measurements with visual information, the scale-factor degeneration phenomenon, common in localisation applications based on purely monocular systems, is avoided. We also showed that correcting an inertial localisation system with visual information is effective, since it suppresses the trajectory drift that characterises dead-reckoning systems. The modelling algorithm, based on the estimated localisation, organises the acquired geometric data in three-dimensional space, producing a point-cloud model that is subsequently converted into a triangular mesh, thus achieving a more realistic representation of the original scene.
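For context, a minimal sketch of the inverse depth parametrization mentioned above, in which a feature is stored as the camera position from which it was first observed, a viewing direction (azimuth/elevation) and an inverse depth, and converted back to a Euclidean point when needed; this follows the usual formulation of the technique rather than the dissertation's own code:

```python
# Illustrative inverse depth parametrization: a landmark is encoded as
# (x0, y0, z0, azimuth, elevation, rho) and recovered as a 3D point by
# p = c0 + (1/rho) * m(azimuth, elevation), where m is a unit ray direction.
import numpy as np

def ray_direction(azimuth: float, elevation: float) -> np.ndarray:
    # Unit vector of the first observation ray in the world frame.
    return np.array([np.cos(elevation) * np.sin(azimuth),
                     -np.sin(elevation),
                     np.cos(elevation) * np.cos(azimuth)])

def inverse_depth_to_point(feature: np.ndarray) -> np.ndarray:
    """feature = [x0, y0, z0, azimuth, elevation, rho] (rho = inverse depth)."""
    c0 = feature[:3]
    azimuth, elevation, rho = feature[3:]
    return c0 + (1.0 / rho) * ray_direction(azimuth, elevation)

# A feature first seen from the origin, straight ahead, roughly 20 m away.
print(inverse_depth_to_point(np.array([0.0, 0.0, 0.0, 0.0, 0.0, 0.05])))
```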
Abstract:
A square-wave voltammetric (SWV) method and a flow injection analysis system with amperometric detection were developed for the determination of tramadol hydrochloride. The SWV method enables the determination of tramadol over the concentration range of 15-75 µM with a detection limit of 2.2 µM. Tramadol could be determined in concentrations between 9 and 50 µM at a sampling rate of 90 h-1, with a detection limit of 1.7 µM using the flow injection system. The electrochemical methods developed were successfully applied to the determination of tramadol in pharmaceutical dosage forms, without any pre-treatment of the samples. Recovery trials were performed to assess the accuracy of the results; the values were between 97 and 102% for both methods.
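As an aside on the detection limits quoted above, one common way such a figure can be estimated from a linear calibration is sketched below; the data points and the 3·s/slope criterion are illustrative assumptions, not the procedure reported in the paper:

```python
# Illustrative estimate of a detection limit from a linear calibration:
# LOD ~ 3 * (standard deviation of the blank signal) / (calibration slope).
import numpy as np

conc = np.array([15.0, 30.0, 45.0, 60.0, 75.0])      # µM, hypothetical
signal = np.array([0.31, 0.60, 0.92, 1.21, 1.49])    # µA, hypothetical
slope, intercept = np.polyfit(conc, signal, 1)

blank_sd = 0.015                                      # µA, hypothetical
lod = 3.0 * blank_sd / slope
print(f"slope = {slope:.4f} µA/µM, LOD ~ {lod:.1f} µM")
```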