921 results for "Simulated defoliation"
Abstract:
Final Master's Project for obtaining the degree of Master in Mechanical Engineering
Abstract:
The introduction of electricity markets and the integration of Distributed Generation (DG) have been reshaping the structure of the power system. Recently, the smart grid concept was introduced to guarantee a more efficient operation of the power system by exploiting the advantages of this new paradigm. Essentially, a smart grid is a structure that integrates different players, with constant communication between them, to improve power system operation and management. One player of major importance in this context is the Virtual Power Player (VPP). In the transportation sector, the Electric Vehicle (EV) is emerging as an alternative to conventional vehicles propelled by fossil fuels. The power system can benefit from this massive introduction of EVs, taking advantage of the EVs' ability to connect to the electric network to charge, and of the expected future ability of EVs to discharge to the network using the Vehicle-to-Grid (V2G) capacity. This thesis proposes alternative strategies to control these two EV modes with the objective of enhancing the management of the power system. Moreover, the power system must ensure the trips of the EVs connected to the electric network: the EV user specifies the amount of energy that must be charged to cover the distance to be traveled. The introduction of EVs turns Energy Resource Management (ERM) in a smart grid environment into a complex problem that can take several minutes or hours to reach the optimal solution. Adequate optimization techniques are required to accommodate this complexity while solving the ERM problem in a reasonable execution time. This thesis presents a tool that solves the ERM considering the intensive use of EVs in the smart grid context. The objective is to obtain the minimum ERM cost, considering the operation cost of DG, the cost of the energy acquired from external suppliers, EV users' payments, and remuneration and penalty costs. The tool is directed at VPPs that manage specific network areas where a high penetration level of EVs is expected. The ERM is solved using two methodologies: the adaptation of a deterministic technique proposed in a previous work, and the adaptation of the Simulated Annealing (SA) technique. To improve the SA performance for this case, three additional heuristics are proposed, exploiting the particularities of an ERM with these characteristics. A set of case studies is presented in this thesis, considering a 32-bus distribution network and up to 3,000 EVs. The first case study solves the scheduling without EVs, serving as a reference for comparison with the proposed approaches. The second case study evaluates the complexity of the ERM with the integration of EVs. The third case study evaluates the performance of the scheduling under different EV control modes. These control modes, combined with the proposed SA approach and the developed heuristics, aim to improve the quality of the ERM while drastically reducing its execution time. The proposed control modes are uncoordinated charging, smart charging, and V2G. The fourth and final case study applies the ERM approach to consecutive days.
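The scheduling metaheuristic named in this abstract is simulated annealing. As a point of reference, here is a minimal, generic SA loop for a scheduling-type cost; the cost function, neighborhood move, cooling schedule, and all numeric values are illustrative assumptions, not the thesis's actual ERM formulation or heuristics:

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=1.0, t_min=1e-3, alpha=0.95, iters=100):
    """Generic SA minimizer: `cost`, `neighbor`, and the cooling schedule
    are problem-specific; the values here are illustrative only."""
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    t = t0
    while t > t_min:
        for _ in range(iters):
            y = neighbor(x)
            fy = cost(y)
            # Accept improvements always; accept worse moves with Boltzmann probability.
            if fy <= fx or random.random() < math.exp((fx - fy) / t):
                x, fx = y, fy
                if fx < fbest:
                    best, fbest = x, fx
        t *= alpha  # geometric cooling
    return best, fbest

# Toy usage: schedule 24 hourly charging power levels against a price signal,
# with a soft constraint that the total delivered energy is 40 kWh.
prices = [0.1 + 0.05 * math.sin(h / 24 * 2 * math.pi) for h in range(24)]
cost = lambda s: sum(p * q for p, q in zip(prices, s)) + abs(sum(s) - 40.0) ** 2

def neighbor(s):
    s = list(s)
    i = random.randrange(len(s))
    s[i] = max(0.0, s[i] + random.uniform(-0.5, 0.5))  # perturb one hour's power
    return s

best, fbest = simulated_annealing(cost, neighbor, [40.0 / 24] * 24)
```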
Abstract:
OBJECTIVE: To estimate the basic reproduction number (R0) of dengue fever including both imported and autochthonous cases. METHODS: The study was based on epidemiological data from the 2003 dengue epidemic in Brasília, Brazil. The basic reproduction number was estimated from the epidemic curve by linearly fitting the initial rise in cases. To simulate an epidemic with both autochthonous and imported cases, a "susceptible-infectious-resistant" compartmental model was designed, in which the imported cases enter as an external forcing. The ratio between the R0 of imported versus autochthonous cases was used as an estimator of the real R0. RESULTS: The comparison of both reproduction numbers (autochthonous cases only versus all cases) showed that treating all cases as autochthonous yielded an R0 above one, even though the real R0 was below one. The same result was obtained when the method was applied to simulated epidemics with a fixed R0. The method was also compared with methods previously proposed by other authors, which were shown to underestimate R0. CONCLUSIONS: The inclusion of both imported and autochthonous cases is crucial for modeling the epidemic dynamics, and thus provides critical information for decision makers in charge of the prevention and control of this disease.
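The abstract describes an SIR model with imported cases as an external forcing. A minimal sketch of that idea, with Euler integration and purely illustrative parameters (not the paper's estimates), could look like this:

```python
import numpy as np

def sir_with_imports(beta, gamma, imports, N, I0, days, dt=0.1):
    """SIR model in which imported infectious cases enter as an external
    forcing `imports(t)` (cases present per day); all parameter values in
    the usage below are illustrative, not the paper's estimates."""
    steps = int(days / dt)
    S, I, R = N - I0, float(I0), 0.0
    t_hist, I_hist = [], []
    for k in range(steps):
        t = k * dt
        force = beta * S * (I + imports(t)) / N  # infection pressure incl. imports
        dS = -force
        dI = force - gamma * I
        dR = gamma * I
        S, I, R = S + dS * dt, I + dI * dt, R + dR * dt
        t_hist.append(t)
        I_hist.append(I)
    return np.array(t_hist), np.array(I_hist)

# With beta/gamma = R0 = 0.9 < 1, autochthonous transmission alone dies out,
# yet a steady inflow of imported cases sustains an apparent epidemic curve.
t, I = sir_with_imports(beta=0.09, gamma=0.1, imports=lambda t: 2.0,
                        N=2_000_000, I0=10, days=180)
```

This reproduces the paper's central caution in miniature: an epidemic curve fed by imports can suggest R0 > 1 even when local transmission is subcritical.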
Abstract:
An Electrocardiogram (ECG) monitoring system must deal with several challenges related to noise sources. The main goal of this work was the study of adaptive signal processing algorithms for ECG noise reduction applied to real signals. This document presents an adaptive filtering technique based on the Least Mean Square (LMS) algorithm to remove the artefacts introduced into the ECG signal by muscular activity (electromyography, EMG) and by power line interference. Real noise signals were used in the experiments, mainly to observe the difference in the algorithm's performance between real and simulated noise sources. Very good results were obtained, owing to the noise-removal capability of this technique.
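The LMS adaptive noise canceller the abstract refers to is a standard algorithm. A minimal sketch, with illustrative step size, filter order, and a synthetic 50 Hz reference (the thesis itself used real noise recordings), is:

```python
import numpy as np

def lms_filter(d, x, mu=0.01, order=32):
    """LMS adaptive noise canceller: `d` is the noisy ECG (primary input),
    `x` a reference correlated with the noise (e.g., a 50/60 Hz pickup).
    `mu` and `order` are illustrative choices, not values from the thesis."""
    n = len(d)
    w = np.zeros(order)
    e = np.zeros(n)  # error signal = cleaned ECG estimate
    for k in range(order, n):
        u = x[k - order:k][::-1]  # most recent reference samples
        y = w @ u                 # filter output = noise estimate
        e[k] = d[k] - y
        w += 2 * mu * e[k] * u    # gradient-descent weight update
    return e

# Toy usage: cancel a 50 Hz interference from a synthetic signal.
fs = 500.0
t = np.arange(0, 10, 1 / fs)
clean = np.sin(2 * np.pi * 1.2 * t)             # stand-in for the ECG
noise = 0.5 * np.sin(2 * np.pi * 50 * t + 0.3)  # power line interference
ref = np.sin(2 * np.pi * 50 * t)                # reference input
cleaned = lms_filter(clean + noise, ref, mu=0.005, order=16)
```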
Abstract:
Master's Dissertation, Biomedical Sciences, 18 March 2016, Universidade dos Açores.
Abstract:
Signal subspace identification is a crucial first step in many hyperspectral processing algorithms, such as target detection, change detection, classification, and unmixing. The identification of this subspace enables a correct dimensionality reduction, yielding gains in algorithm performance and complexity and in data storage. This paper introduces a new minimum mean squared error-based approach to infer the signal subspace in hyperspectral imagery. The method, termed hyperspectral signal identification by minimum error (HySime), is eigendecomposition-based, unsupervised, and fully automatic (i.e., it does not depend on any tuning parameters). It first estimates the signal and noise correlation matrices and then selects the subset of eigenvectors that best represents the signal subspace in the least squared error sense. State-of-the-art performance of the proposed method is illustrated using simulated and real hyperspectral images.
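A compressed sketch of the HySime idea, assuming the noise estimate is already available (the published method obtains it by per-band multiple regression); the shapes and the selection rule follow one common reading of the paper, not the authors' code:

```python
import numpy as np

def hysime_sketch(Y, W):
    """Simplified HySime-style subspace estimate. Y: (bands, pixels) data,
    W: (bands, pixels) noise estimate. Returns the estimated subspace
    dimension and a basis for it."""
    n = Y.shape[1]
    Rn = W @ W.T / n             # noise correlation matrix
    X = Y - W                    # rough signal estimate
    Rx = X @ X.T / n             # signal correlation matrix
    lam, E = np.linalg.eigh(Rx)  # ascending eigenvalues
    E = E[:, ::-1]               # sort basis by decreasing eigenvalue
    # Including direction e_i trades projection error (signal power p_i)
    # against projected noise power sigma_i; keep it when p_i > 2*sigma_i.
    p = (E * (Rx @ E)).sum(axis=0)      # e_i^T Rx e_i per direction
    sigma = (E * (Rn @ E)).sum(axis=0)  # e_i^T Rn e_i per direction
    keep = p > 2 * sigma
    # Select by mask (in practice the kept directions are the leading ones).
    return int(keep.sum()), E[:, keep]
```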
Abstract:
Independent component analysis (ICA) has recently been proposed as a tool to unmix hyperspectral data. ICA is founded on two assumptions: 1) the observed spectrum vector is a linear mixture of the constituent spectra (endmember spectra) weighted by the corresponding abundance fractions (sources); 2) sources are statistically independent. Independent factor analysis (IFA) extends ICA to linear mixtures of independent sources immersed in noise. Concerning hyperspectral data, the first assumption is valid whenever the multiple scattering among the distinct constituent substances (endmembers) is negligible and the surface is partitioned according to the fractional abundances. The second assumption, however, is violated, since the sum of the abundance fractions associated with each pixel is constant, due to physical constraints in the data acquisition process. Thus, sources cannot be statistically independent, which compromises the performance of ICA/IFA algorithms in hyperspectral unmixing. This paper studies the impact of hyperspectral source statistical dependence on ICA and IFA performance. We conclude that the accuracy of these methods tends to improve with increasing signature variability, number of endmembers, and signal-to-noise ratio. In any case, there are always endmembers incorrectly unmixed. We arrive at this conclusion by minimizing the mutual information of simulated and real hyperspectral mixtures. The computation of the mutual information is based on fitting mixtures of Gaussians to the observed data. A method to sort ICA and IFA estimates in terms of the likelihood of being correctly unmixed is also proposed.
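An illustrative experiment in the spirit of the paper, using scikit-learn's FastICA on simulated linear mixtures whose Dirichlet-distributed abundances sum to one (hence the sources are dependent); all sizes here are arbitrary:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_pixels, n_end, n_bands = 5000, 3, 50

# Dirichlet abundances: non-negative and summing to one, so NOT independent.
A = rng.dirichlet(alpha=[1.0] * n_end, size=n_pixels)  # (pixels, endmembers)
M = rng.random((n_bands, n_end))                       # endmember signatures
Y = A @ M.T + 0.01 * rng.standard_normal((n_pixels, n_bands))  # noisy mixtures

ica = FastICA(n_components=n_end, random_state=0)
S_est = ica.fit_transform(Y)  # estimated "sources" (abundances, up to scale/order)
```

Comparing `S_est` against the columns of `A` (after matching permutation and scale) shows the kind of unmixing error the paper attributes to the violated independence assumption.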
Abstract:
Linear unmixing decomposes a hyperspectral image into a collection of reflectance spectra of the materials present in the scene, called endmember signatures, and the corresponding abundance fractions at each pixel in a spatial area of interest. This paper introduces a new unmixing method, called Dependent Component Analysis (DECA), which overcomes the limitations of unmixing methods based on Independent Component Analysis (ICA) and on geometrical properties of hyperspectral data. DECA models the abundance fractions as mixtures of Dirichlet densities, thus enforcing the constraints on abundance fractions imposed by the acquisition process, namely non-negativity and constant sum. The mixing matrix is inferred by a generalized expectation-maximization (GEM) type algorithm. The performance of the method is illustrated using simulated and real data.
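DECA's GEM inference is beyond a short sketch, but its observation model is easy to state in code: abundances drawn from a mixture of Dirichlet densities, then mixed linearly. The following generator uses illustrative sizes and parameters only:

```python
import numpy as np

rng = np.random.default_rng(1)
n_pixels, n_end, n_bands, n_modes = 4000, 4, 60, 2

weights = np.array([0.6, 0.4])                         # Dirichlet-mixture weights
alphas = rng.uniform(0.5, 5.0, size=(n_modes, n_end))  # one alpha vector per mode
modes = rng.choice(n_modes, size=n_pixels, p=weights)  # mode label per pixel
A = np.stack([rng.dirichlet(alphas[m]) for m in modes])  # non-negative, sum-to-one
M = rng.random((n_bands, n_end))                         # mixing matrix (signatures)
Y = A @ M.T + 0.005 * rng.standard_normal((n_pixels, n_bands))
```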
Abstract:
Chapter in Book Proceedings with Peer Review. First Iberian Conference, IbPRIA 2003, Puerto de Andratx, Mallorca, Spain, June 4-6, 2003. Proceedings.
Abstract:
Chapter in Book Proceedings with Peer Review. First Iberian Conference, IbPRIA 2003, Puerto de Andratx, Mallorca, Spain, June 4-6, 2003. Proceedings.
Abstract:
Given a set of mixed spectral (multispectral or hyperspectral) vectors, linear spectral mixture analysis, or linear unmixing, aims at estimating the number of reference substances, also called endmembers, their spectral signatures, and their abundance fractions. This paper presents a new method for unsupervised endmember extraction from hyperspectral data, termed vertex component analysis (VCA). The algorithm exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. In a series of experiments using simulated and real data, the VCA algorithm competes with state-of-the-art methods, with a computational complexity between one and two orders of magnitude lower than the best available method.
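A simplified, VCA-flavoured sketch of the extraction loop (the published algorithm adds an SNR-dependent projective projection that is omitted here; the initialization and shapes are illustrative):

```python
import numpy as np

def vca_sketch(Y, p, seed=0):
    """Simplified VCA-style endmember extraction. Y: (bands, pixels),
    p: number of endmembers. Core idea only: each new endmember is the
    extreme pixel along a direction orthogonal to the span of those
    already found."""
    rng = np.random.default_rng(seed)
    bands, n = Y.shape
    E = np.zeros((bands, p))       # endmember signatures (columns)
    idx = []
    A = np.zeros((bands, 1))
    A[0, 0] = 1.0                  # arbitrary initial subspace
    for i in range(p):
        w = rng.standard_normal((bands, 1))
        # Project w onto the orthogonal complement of span(A).
        P = np.eye(bands) - A @ np.linalg.pinv(A)
        f = P @ w
        f /= np.linalg.norm(f)
        v = (f.T @ Y).ravel()          # project all pixels onto f
        j = int(np.argmax(np.abs(v)))  # extreme pixel = new endmember
        idx.append(j)
        E[:, i] = Y[:, j]
        A = E[:, : i + 1]
    return E, idx
```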
Abstract:
International Conference with Peer Review. 2012 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), 22-27 July 2012, Munich, Germany.
Abstract:
Proceedings of International Conference. SPIE 7477, Image and Signal Processing for Remote Sensing XV, 28 September 2009.
Abstract:
The present study aims to experimentally compare two children who practice rink hockey, one healthy and one with genu valgum (knock knees), qualitatively evaluating the static and dynamic postural differences arising from the use of the skates specific to this sport, through Ground Reaction Force (GRF) analysis, Electromyography (EMG), motion capture, and modelling and simulation. To this end, a test protocol was defined with the following tasks: standing at rest with and without skates, walking, running, gliding with both feet on the ground, and gliding with the left foot raised. At rest, the variation of the point of application of the GRF was evaluated for the healthy and the pathological child, with and without skates. Still in the rest task, the medio-lateral and antero-posterior components of the GRF were evaluated individually, together with its vertical component and the muscular activity of the medial gastrocnemius (GM), rectus femoris (RF), vastus medialis (VM), vastus lateralis (VL), biceps femoris (BF), semitendinosus (ST), tensor fasciae latae (TFL), and lateral gastrocnemius (GL), in order to compare the GRF intensity and the muscular activity at the different time instants of this task. For the remaining tasks, only the medio-lateral and antero-posterior components of the GRF were evaluated individually, together with its vertical component and the activity of the aforementioned muscles, highlighting the evident differences between the curves of the healthy child and those of the pathological child during the different instants of the movement. All the tasks mentioned, except the rest task with skates, were also simulated using musculoskeletal models. From these movement simulations, the joint angles were obtained and analysed. At the end of the results, a summary table was presented with the coefficients of variation of each quantity, except for the plots of the spatial position of the GRF, where a large inter-individual variability was found in each task. The analysis of the results of each task leads to the conclusion that the use of skates can produce greater muscular activation in the pathological child, although joint instability is observed. Despite this instability, it can be inferred that the greater muscular activation resulting from the use of skates, as occurs in the practice of rink hockey, may bring a long-term improvement in knee joint stability and body support, provided by muscular strengthening.
Abstract:
This research and development work is founded on the concept of fuzzy logic control. Using the tools of the Matlab software, it was possible to develop a controller based on fuzzy inference capable of controlling any type of real physical system, regardless of its characteristics. Fuzzy logic control is a very particular type of control, since it allows the simultaneous use of numerical data and linguistic variables based on heuristic knowledge of the systems to be controlled. In this way one can quantify, for example, whether a glass is "half full" or "half empty", whether a person is "tall" or "short", whether it is "cold" or "very cold". PID control is, without any doubt, the most widely used controller in system control. Owing to its simplicity of construction, its low application and maintenance costs, and the results it achieves, this controller is the first option when implementing a control loop in a given system. It is characterized by three tuning parameters, namely the proportional, integral, and derivative components, which together allow an effective tuning of any type of system. In order to automate the controller tuning process, and taking advantage of the best of fuzzy control and PID control, the two controllers were combined; together, as will be seen later, they produced results that meet the stated objectives. With the help of Matlab's Simulink, the block diagram of the control system was developed, in which the fuzzy controller has the task of supervising the response of the PID controller, correcting it over the simulation time. The developed controller is called the FuzzyPID controller. During the practical development of the work, the response of several systems to a unit step input was simulated. The systems studied are mostly real physical systems, representing mechanical, thermal, pneumatic, and electrical systems, among others, which can easily be described by first-, second-, and higher-order transfer functions, with and without delay.
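The thesis implements the FuzzyPID controller in Matlab/Simulink, which is not reproduced here. As a language-neutral illustration of the idea (a fuzzy-style supervisor adjusting the PID action from the error magnitude), the following Python sketch uses entirely illustrative membership functions, gains, and plant:

```python
def triang(x, a, b, c):
    """Triangular membership function on [a, c] with peak at b."""
    return max(0.0, min((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)))

def fuzzy_gain(e):
    """Toy fuzzy supervisor: maps |error| to a multiplicative gain on Kp.
    Membership breakpoints and output levels are illustrative assumptions."""
    small = triang(abs(e), -0.5, 0.0, 0.5)
    medium = triang(abs(e), 0.25, 0.75, 1.25)
    large = min(1.0, max(0.0, (abs(e) - 1.0) / 0.5))
    w = small + medium + large + 1e-12
    return (0.8 * small + 1.2 * medium + 1.6 * large) / w  # centroid defuzzification

# PID loop on a first-order plant G(s) = 1/(tau*s + 1), discretized by Euler.
dt, tau = 0.01, 1.0
Kp, Ki, Kd = 2.0, 1.0, 0.1
y, integ, e_prev = 0.0, 0.0, 0.0
for k in range(int(5.0 / dt)):
    r = 1.0                   # unit step reference
    e = r - y
    integ += e * dt
    deriv = (e - e_prev) / dt
    u = fuzzy_gain(e) * Kp * e + Ki * integ + Kd * deriv  # supervised PID action
    y += (-y + u) / tau * dt  # plant update
    e_prev = e
```

The design point mirrors the abstract: the PID loop does the regulation, while the fuzzy layer reshapes its action as the error evolves, here reduced to a single gain-scheduling rule for brevity.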