993 results for Restorable load estimation
Abstract:
This paper extends the now classic two-sensor complementary filter (CF) design for sensor fusion to the case where three sensors providing measurements in different frequency bands are available. It is shown that applying classical CF techniques to a generic three-sensor fusion problem, based solely on the sensors' frequency-domain characteristics, leads to a minimal-realization, stable, sub-optimal solution, denoted Complementary Filters3 (CF3). A new approach to the estimation problem is then introduced, based on optimal linear Kalman filtering techniques. The resulting solution is shown to preserve the complementary property, i.e. the transfer functions associated with the three sensors add up to one, in both the continuous- and discrete-time domains. This new class of filters is denoted Complementary Kalman Filters3 (CKF3). The attitude estimation of a mobile robot is addressed, based on data from a rate gyroscope, a digital compass, and odometry, and the experimental results obtained are reported.
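For readers unfamiliar with the complementary property mentioned above, the following minimal sketch (with illustrative filter constants and signal names, not the paper's CF3/CKF3 design) fuses three measurements of the same quantity using three discrete-time filters whose transfer functions add up to one by construction:

```python
import numpy as np

def complementary_fusion3(z_low, z_mid, z_high, alpha_a=0.02, alpha_b=0.2):
    """Fuse three measurements of the same signal, each trusted in a
    different frequency band.  The three discrete transfer functions
    (L_a, L_b - L_a, 1 - L_b) add up to one by construction."""
    n = len(z_low)
    la_low = la_mid = lb_mid = lb_high = 0.0   # low-pass filter states
    fused = np.empty(n)
    for k in range(n):
        # first-order low-pass filters (exponential moving averages)
        la_low  += alpha_a * (z_low[k]  - la_low)    # L_a on the low-band sensor
        la_mid  += alpha_a * (z_mid[k]  - la_mid)    # L_a on the mid-band sensor
        lb_mid  += alpha_b * (z_mid[k]  - lb_mid)    # L_b on the mid-band sensor
        lb_high += alpha_b * (z_high[k] - lb_high)   # L_b on the high-band sensor
        # band-limited contributions: low-pass + band-pass + high-pass
        fused[k] = la_low + (lb_mid - la_mid) + (z_high[k] - lb_high)
    return fused
```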
Abstract:
This paper presents an ankle-mounted Inertial Navigation System (INS) used to estimate the distance traveled by a pedestrian. The distance is estimated from the number of steps taken by the user. The proposed method relies on force sensors to enhance the results obtained from the INS. Experimental results show that, depending on the step frequency, the traveled-distance error varies between 2.7% and 5.6%.
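As a rough illustration of the step-based distance estimate (assumed threshold detector and step length; not the paper's force-sensor/INS integration):

```python
import numpy as np

def count_steps(force, threshold=None):
    """Count steps as rising crossings of a force threshold.  The default
    threshold (midpoint of the signal range) is an illustrative assumption,
    not the paper's calibration."""
    force = np.asarray(force, dtype=float)
    if threshold is None:
        threshold = 0.5 * (force.min() + force.max())
    above = force > threshold
    rising = np.flatnonzero(~above[:-1] & above[1:])
    return len(rising)

def travelled_distance(force, step_length_m=0.7):
    """Distance = number of detected steps x assumed step length."""
    return count_steps(force) * step_length_m
```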
Abstract:
This paper addresses the estimation of surfaces from a set of 3D points using the unified framework described in [1]. This framework uses competitive learning for curve estimation, i.e., a set of points is defined on a deformable curve and they all compete to represent the available data. This paper extends the unified framework to surface estimation. It is shown that competitive learning performs better than snakes, improving the model's performance in the presence of concavities and allowing close surfaces to be discriminated. The proposed model is evaluated using synthetic data and medical images (MRI and ultrasound).
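A minimal sketch of the winner-take-all competitive learning rule this family of methods builds on (the deformable-model regularization and the surface extension of the paper are not reproduced):

```python
import numpy as np

def competitive_fit(data, n_units=20, lr=0.1, epochs=50, rng=None):
    """Fit a set of model points to 3D data by competitive learning:
    for every data sample the nearest model point (the 'winner') is
    moved towards it.  A plain winner-take-all rule only."""
    rng = np.random.default_rng(rng)
    data = np.asarray(data, dtype=float)
    # initialise the units on randomly chosen data points
    units = data[rng.choice(len(data), size=n_units, replace=False)].copy()
    for _ in range(epochs):
        for x in rng.permutation(data):
            winner = np.argmin(np.linalg.norm(units - x, axis=1))
            units[winner] += lr * (x - units[winner])  # move winner towards the sample
    return units
```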
Abstract:
Dimensionality reduction plays a crucial role in many hyperspectral data processing and analysis algorithms. This paper proposes a new mean-squared-error-based approach to determine the signal subspace in hyperspectral imagery. The method first estimates the signal and noise correlation matrices and then selects the subset of eigenvalues that best represents the signal subspace in the least-squares sense. The effectiveness of the proposed method is illustrated using simulated and real hyperspectral images.
Abstract:
As is widely known, in structural dynamic applications, ranging from structural coupling to model updating, the incompatibility between measured and simulated data is inevitable due to the problem of coordinate incompleteness. Usually, the experimental data from conventional vibration testing are collected at a few translational degrees of freedom (DOF), due to forces applied with hammer or shaker exciters, over a limited frequency range. Hence, one can only measure a portion of the receptance matrix: a few columns, related to the forced DOFs, and a few rows, related to the measured DOFs. In contrast, finite element modeling yields a full data set, both in terms of DOFs and identified modes. Over the years, several model reduction techniques have been proposed, as well as data expansion ones. However, the latter are significantly fewer and the demand for efficient techniques remains. In this work, a technique is proposed for expanding measured frequency response functions (FRFs) over the entire set of DOFs. The technique is based on a modified Kidder's method and the principle of reciprocity, and it avoids the need for modal identification, as it uses the measured FRFs directly. To illustrate the performance of the proposed technique, a set of simulated experimental translational FRFs is taken as reference to estimate rotational FRFs, including those due to applied moments.
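The expansion idea can be illustrated with a Kidder-type dynamic expansion in NumPy; the paper's modified scheme and the reciprocity step that yields moment-excited FRFs are not reproduced here, and the variable names are assumptions:

```python
import numpy as np

def kidder_expand(K, M, omega, measured_idx, u_measured):
    """Kidder-type dynamic expansion: estimate the response at the
    unmeasured DOFs from the measured ones using the FE matrices,
        u_s = -(K_ss - w^2 M_ss)^{-1} (K_sm - w^2 M_sm) u_m.
    The paper's modified scheme and its reciprocity-based treatment of
    moment excitation are not reproduced in this sketch."""
    n = K.shape[0]
    m = np.asarray(measured_idx)
    s = np.setdiff1d(np.arange(n), m)      # unmeasured (slave) DOFs
    Z = K - omega**2 * M                   # dynamic stiffness at this frequency
    u_s = -np.linalg.solve(Z[np.ix_(s, s)], Z[np.ix_(s, m)] @ u_measured)
    u_full = np.empty(n, dtype=complex)
    u_full[m], u_full[s] = u_measured, u_s
    return u_full
```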
Abstract:
Given a hyperspectral image, determining the number of endmembers and the subspace where they live, without any prior knowledge, is crucial to the success of hyperspectral image analysis. This paper introduces a new minimum-mean-squared-error-based approach to infer the signal subspace in hyperspectral imagery. The method, termed hyperspectral signal identification by minimum error (HySime), is eigendecomposition based and does not depend on any tuning parameters. It first estimates the signal and noise correlation matrices and then selects the subset of eigenvalues that best represents the signal subspace in the least-squared-error sense. The effectiveness of the proposed method is illustrated using simulated data based on U.S.G.S. laboratory spectra and real hyperspectral data collected by the AVIRIS sensor over Cuprite, Nevada.
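A simplified sketch of the HySime-style selection rule follows; the noise-estimation step and the exact minimum-error criterion of the paper are only approximated here:

```python
import numpy as np

def estimate_noise(Y):
    """Rough per-band noise estimate by regressing each band on the
    others (a simplification of HySime's noise-estimation step).
    Y is a bands x pixels array."""
    L, N = Y.shape
    noise = np.zeros_like(Y)
    for i in range(L):
        others = np.delete(Y, i, axis=0)
        beta, *_ = np.linalg.lstsq(others.T, Y[i], rcond=None)
        noise[i] = Y[i] - others.T @ beta
    return noise

def signal_subspace(Y):
    """Simplified rule: keep an eigenvector of the signal correlation
    matrix when the signal power it captures exceeds the noise power
    projected onto it."""
    L, N = Y.shape
    W = estimate_noise(Y)
    X = Y - W                         # crude signal estimate
    Rx = X @ X.T / N                  # signal correlation matrix
    Rn = W @ W.T / N                  # noise correlation matrix
    _, E = np.linalg.eigh(Rx)
    E = E[:, ::-1]                    # eigenvectors, descending eigenvalue order
    signal_power = np.einsum('ij,jk,ki->i', E.T, Rx, E)
    noise_power = np.einsum('ij,jk,ki->i', E.T, Rn, E)
    keep = signal_power > noise_power
    return int(keep.sum()), E[:, keep]
```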
Abstract:
In hyperspectral imagery, a pixel typically consists of a mixture of the spectral signatures of reference substances, also called endmembers. Linear spectral mixture analysis, or linear unmixing, aims at estimating the number of endmembers, their spectral signatures, and their abundance fractions. This paper proposes a framework for hyperspectral unmixing. A blind method (SISAL) is used to estimate the unknown endmember signatures and their abundance fractions. This method solves a non-convex problem via a sequence of augmented Lagrangian optimizations, in which the positivity constraints, forcing the spectral vectors to belong to the convex hull of the endmember signatures, are replaced by soft constraints. The proposed framework simultaneously estimates the number of endmembers present in the hyperspectral image with an algorithm based on the minimum description length (MDL) principle. Experimental results on both synthetic and real hyperspectral data demonstrate the effectiveness of the proposed algorithm.
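SISAL itself solves a sequence of augmented Lagrangian problems to estimate the endmembers; the sketch below only illustrates the underlying linear mixing model, recovering non-negative, approximately sum-to-one abundances for one pixel once an endmember matrix is available (the NNLS augmentation trick is an illustrative choice, not the paper's method):

```python
import numpy as np
from scipy.optimize import nnls

def abundances(M, y, delta=1e3):
    """Abundance fractions for one pixel under the linear mixing model
    y = M a + noise, with a >= 0 and sum(a) = 1.  The sum-to-one
    constraint is enforced softly by appending a heavily weighted row
    of ones; this is *not* the SISAL algorithm, which estimates the
    endmember signatures themselves."""
    L, p = M.shape
    M_aug = np.vstack([M, delta * np.ones((1, p))])  # weighted sum-to-one row
    y_aug = np.append(y, delta)
    a, _ = nnls(M_aug, y_aug)
    return a
```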
Abstract:
The planet's energy sustainability is an ongoing concern and, in this sense, energy efficiency is essential for reducing consumption in all sectors of activity. In the residential sector, improper user behavior, combined with a lack of knowledge about the consumption of individual appliances, hinders the reduction of energy consumption. An important tool in this respect is consumption monitoring, in particular non-intrusive monitoring, which has economic advantages over intrusive monitoring, although it raises some challenges in load disaggregation. This document therefore addresses non-intrusive monitoring, and a tool was developed for disaggregating residential loads, especially appliances with high consumption. To this end, the aggregate electricity, water, and gas consumption of six dwellings in the municipality of Vila Nova de Gaia was monitored. By incorporating the water and gas vectors in addition to electricity, it was shown that the performance of the appliance disaggregation algorithm can increase for appliances that simultaneously use electricity and water or electricity and gas. Energy efficiency is also part of this work: energy efficiency measures were implemented for one of the dwellings under study in order to determine which ones offered the greatest savings potential and the shortest payback periods. In general, the proposed objectives were achieved, and it is expected that, in the near future, non-intrusive consumption monitoring will become a reference solution for the energy sustainability of the residential sector.
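A toy illustration of the multi-vector idea, tagging aggregate electricity events with concurrent water or gas use (thresholds and signal names are assumptions, not the tool developed in the work):

```python
import numpy as np

def detect_events(power, min_step=200.0):
    """Indices where the aggregate electrical power jumps by more than
    `min_step` watts (a naive edge detector with an assumed threshold)."""
    return np.flatnonzero(np.abs(np.diff(power)) > min_step) + 1

def label_events(power, water_flow, gas_flow, window=3):
    """Tag each electrical event with whether water or gas was also
    flowing around the same instant.  Concurrent water points towards
    appliances such as washing machines; concurrent gas towards gas
    water heaters.  A sketch of the multi-vector idea only."""
    water_flow, gas_flow = np.asarray(water_flow), np.asarray(gas_flow)
    labels = []
    for k in detect_events(np.asarray(power, dtype=float)):
        lo, hi = max(0, k - window), k + window + 1
        labels.append((k,
                       bool(np.any(water_flow[lo:hi] > 0)),
                       bool(np.any(gas_flow[lo:hi] > 0))))
    return labels   # list of (sample index, uses water, uses gas)
```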
Abstract:
Recent data suggest that the clinical course of reactional states in leprosy is closely related to the cytokine profile released locally or systemically by the patients. In the present study, patients with erythema nodosum leprosum (ENL) were grouped according to the intensity of their clinical symptoms. Clinical and immunological aspects of ENL and the impact of these parameters on bacterial load were assessed in conjunction with patients' in vitro immune response to mycobacterial antigens. In 10 out of the 17 patients tested, BI (bacterial index) was reduced by at least 1 log from leprosy diagnosis to the onset of their first reactional episode (ENL), as compared to an expected 0.3 log reduction in the unreactional group for the same MDT (multidrug therapy) period. However, no difference in the rate of BI reduction was noted at the end of MDT among ENL and unreactional lepromatous patients. Accordingly, although TNF-alpha (tumor necrosis factor) levels were enhanced in the sera of 70.6% of the ENL patients tested, no relationship was noted between circulating TNF-alpha levels and the decrease in BI detected at the onset of the reactional episode. Evaluation of bacterial viability of M. leprae isolated from the reactional lesions showed no growth in the mouse footpads. Only 20% of the patients demonstrated specific immune response to M. leprae during ENL. Moreover, high levels of soluble IL-2R (interleukin-2 receptor) were present in 78% of the patients. Circulating anti-neural (anti-ceramide and anti-galactocerebroside antibodies) and anti-mycobacterial antibodies were detected in ENL patients' sera as well, which were not related to the clinical course of disease. Our data suggest that bacterial killing is enhanced during reactions. Emergence of specific immune response to M. leprae and the effective role of TNF-alpha in mediating fragmentation of bacteria still need to be clarified.
Abstract:
This paper addresses the characterization of medium-voltage (MV) electric power consumers based on a data clustering approach. The aim is to identify typical load profiles by selecting the best partition of a power consumption database from a pool of partitions produced by several clustering algorithms. The best partition is selected using several cluster validity indices. These methods are intended to be used in a smart grid environment to extract useful knowledge about customers' behavior. The data-mining-based methodology presented throughout the paper consists of several steps, namely the data pre-processing phase, the application of the clustering algorithms, and the evaluation of the quality of the partitions. To validate the approach, a case study with a real database of 1,022 MV consumers was used.
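A minimal sketch of selecting a partition from a pool produced by different clustering algorithms using a validity index (scikit-learn; a single silhouette index is used here, whereas the paper combines several indices):

```python
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.metrics import silhouette_score

def best_partition(load_profiles, k_values=range(3, 10)):
    """Build a pool of partitions with two clustering algorithms and
    pick the one preferred by the silhouette validity index.
    `load_profiles` is a consumers x time-steps array of normalised
    consumption curves (an illustrative input format)."""
    best, best_score = None, -np.inf
    for k in k_values:
        for algo in (KMeans(n_clusters=k, n_init=10, random_state=0),
                     AgglomerativeClustering(n_clusters=k)):
            labels = algo.fit_predict(load_profiles)
            score = silhouette_score(load_profiles, labels)
            if score > best_score:
                best, best_score = labels, score
    return best, best_score
```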
Abstract:
The deregulation of electricity markets has diversified the range of financial transaction modes between the independent system operator (ISO), generation companies (GENCOs), and load-serving entities (LSEs), the main interacting players in a day-ahead market (DAM). LSEs sell electricity to end-users and retail customers. An LSE that owns distributed generation (DG) or energy storage units can supply part of the load it serves when the nodal price of electricity rises. This opportunity stimulates LSEs to install storage or generation facilities at buses with higher locational marginal prices (LMP). The short-term advantage of this model is reducing the risk of financial losses for LSEs in DAMs; its long-term benefit for the LSEs and the whole system is market power mitigation, obtained by virtually increasing the price elasticity of demand. The model also enables the LSEs to manage financial risks within a stochastic programming framework.
Abstract:
The use of demand response programs enables the adequate use of the resources of small and medium players, bringing high benefits to the smart grid and increasing its efficiency. One of the difficulties in implementing this paradigm is the lack of intelligence in the management of small and medium-sized players. In order to make demand response programs a feasible solution, it is essential that small and medium players have efficient energy management and a fair optimization mechanism that decreases consumption without a heavy loss of comfort, making it acceptable for the users. This paper addresses the application of real-time pricing in a house that uses an intelligent optimization module involving artificial neural networks.
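A simple illustration of price-responsive scheduling under hourly real-time prices (the paper's ANN-based optimization module is not reproduced; names and parameters are assumptions):

```python
import numpy as np

def schedule_deferrable(prices, run_hours, window=(0, 24)):
    """Place a deferrable appliance run of `run_hours` consecutive hours
    in the cheapest slot of the allowed window, given hourly real-time
    prices.  Illustrates price-responsive load shifting only."""
    prices = np.asarray(prices, dtype=float)
    lo, hi = window
    costs = [prices[s:s + run_hours].sum()
             for s in range(lo, hi - run_hours + 1)]
    start = lo + int(np.argmin(costs))
    return start, costs[start - lo]   # cheapest start hour and its energy cost
```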
Abstract:
Load forecasting has gradually become a major field of research in the electricity industry. It is extremely important for the electric sector in a deregulated environment, as it provides useful support to power system management. Accurate load forecasting models are required for the operation and planning of a utility company, and they have received increasing attention from researchers in this field. Many mathematical methods have been developed for load forecasting. This work aims to develop and implement a method for short-term load forecasting (STLF) based on Holt-Winters exponential smoothing and an artificial neural network (ANN). One of the main contributions of this paper is the application of the Holt-Winters exponential smoothing approach to the forecasting problem and, as an evaluation of past forecasting work, data mining techniques are also applied to short-term load forecasting. The ANN and Holt-Winters exponential smoothing approaches are compared and evaluated.
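A minimal Holt-Winters short-term load forecasting sketch using statsmodels (hourly data, daily seasonality, and additive components are assumptions; the ANN benchmark is not shown):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

def holt_winters_stlf(load_series, horizon=24):
    """Short-term load forecast with additive Holt-Winters exponential
    smoothing on an hourly series with daily seasonality (period = 24)."""
    model = ExponentialSmoothing(load_series, trend="add",
                                 seasonal="add", seasonal_periods=24)
    fit = model.fit()
    return fit.forecast(horizon)

# illustrative usage with a synthetic hourly load profile
hours = pd.date_range("2024-01-01", periods=24 * 14, freq="h")
load = 500 + 150 * np.sin(2 * np.pi * hours.hour / 24) \
       + np.random.normal(0, 10, len(hours))
forecast = holt_winters_stlf(pd.Series(load, index=hours), horizon=24)
```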
Abstract:
In competitive electricity markets, a profit-seeking load-serving entity (LSE) needs to optimally adjust the financial incentives offered to end users who buy electricity at regulated rates, so that they reduce their consumption during periods of high market prices. In this model, the LSE manages demand response (DR) by offering financial incentives to retail customers in order to maximize its expected profit and reduce the risk of experiencing market power. The stochastic formulation is implemented on a test system in which a number of loads are supplied through LSEs.
Abstract:
Demand response is an energy resource that has gained increasing importance in the context of competitive electricity markets and smart grids. New business models and methods designed to integrate demand response into electricity markets and smart grids have been published, reporting the need for additional work in this field. In order to adequately remunerate the participation of consumers in demand response programs, improved methods for evaluating consumers' performance are needed. The methodology proposed in the present paper identifies the baseline approach that best fits the consumer's historical consumption, in order to determine the expected consumption in the absence of participation in a demand response event and then determine the actual consumption reduction. The defined baseline can then be used to better determine the remuneration of the consumer. The paper includes a case study with real data to illustrate the application of the proposed methodology.
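For illustration, a common "high X of Y" baseline and the resulting consumption reduction can be computed as below; this is one baseline family, not necessarily the one selected by the proposed methodology:

```python
import numpy as np

def baseline_high_x_of_y(history, event_day_loads, x=5, y=10):
    """Illustrative 'high X of Y' baseline: the expected consumption on
    an event day is the hourly average of the X highest-consumption
    days among the previous Y non-event days."""
    history = np.asarray(history, dtype=float)     # y days x 24 hours
    totals = history[-y:].sum(axis=1)
    top = np.argsort(totals)[-x:]                  # X highest of the last Y days
    baseline = history[-y:][top].mean(axis=0)
    reduction = baseline - np.asarray(event_day_loads, dtype=float)
    return baseline, reduction
```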