957 results for Real data
Abstract:
Power law (PL) distributions have been widely reported in the modeling of distinct real phenomena and have been associated with fractal structures and self-similar systems. In this paper, we analyze real data that follow a PL and a double PL behavior and verify the relation between the PL coefficient and the capacity dimension of known fractals. A method is proposed that translates PL coefficients into the capacity dimension of fractals for any real data.
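For context, the two quantities whose relation is investigated can be stated with their standard textbook definitions (these are not results specific to this paper): the capacity (box-counting) dimension of a set and the coefficient of a power-law tail.

```latex
% Capacity (box-counting) dimension: N(\epsilon) boxes of side \epsilon cover the set
D = \lim_{\epsilon \to 0} \frac{\log N(\epsilon)}{\log (1/\epsilon)}

% Power-law tail with coefficient \alpha (C a positive constant)
P(X > x) \sim C \, x^{-\alpha}, \qquad x \to \infty
```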
Abstract:
This paper presents a new parallel implementation of a previously proposed hyperspectral coded aperture (HYCA) algorithm for compressive sensing on graphics processing units (GPUs). The HYCA method combines the ideas of spectral unmixing and compressive sensing, exploiting the high spatial correlation that can be observed in the data and the generally low number of endmembers needed to explain the data. The proposed implementation exploits the GPU architecture at a low level, thus taking full advantage of the computational power of GPUs through shared memory and coalesced memory accesses. The proposed algorithm is evaluated not only in terms of reconstruction error but also in terms of computational performance using two different GPU architectures by NVIDIA: GeForce GTX 590 and GeForce GTX TITAN. Experimental results using real data reveal significant speedups with regard to the serial implementation.
Abstract:
This paper proposes an implementation, based on a multi-agent system, of a management system for the automated negotiation of electricity allocation for charging electric vehicles (EVs), and simulates its performance. The widespread existence of charging infrastructures capable of autonomous operation is recognised as a major driver towards the mass adoption of EVs by mobility consumers. Eventually, conflicting requirements from both the power grid and EV owners require automated middleman aggregator agents to intermediate all operations, for example bidding and negotiation, between these parties. Multi-agent systems are designed to provide distributed, modular, coordinated and collaborative management systems; therefore, they seem suitable to address the management of such complex charging infrastructures. Our solution consists in the implementation of virtual agents to be integrated into the management software of a charging infrastructure. We start by modelling the multi-agent architecture, using a federated, hierarchically layered setup, as well as the agents' behaviours and interactions. Each of these layers comprises several components, for example databases, decision-making and auction mechanisms. The implementation of the multi-agent platform and auction rules, and of models for battery dynamics, is also addressed. Four scenarios were predefined to assess the management system performance under real usage conditions, considering different types of profiles for EV owners, different infrastructure configurations and usage, and different loads on the utility grid (where real data from the concession holder of the Portuguese electricity transmission grid is used). Simulations carried out with the four scenarios validate the performance of the modelled system while complying with all the requirements. Although all of these have been performed for one charging station alone, a multi-agent design may in the future be used for the higher-level problem of distributing energy among charging stations. Copyright (c) 2014 John Wiley & Sons, Ltd.
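As a hedged illustration of the kind of auction-based allocation such aggregator agents could run, the sketch below greedily serves the highest-paying charging bids; the names and the single-round sealed-bid rule are assumptions for illustration, not the paper's actual negotiation protocol.

```python
# Hypothetical sketch: a single-round sealed-bid allocation of available charging
# power among EV agents, run by an aggregator. Names and rules are illustrative only.
from dataclasses import dataclass

@dataclass
class Bid:
    ev_id: str
    price_per_kwh: float   # what the EV owner is willing to pay
    requested_kw: float    # power requested for the next period

def allocate(bids: list[Bid], available_kw: float) -> dict[str, float]:
    """Greedily allocate capacity to the highest-paying bids."""
    allocation: dict[str, float] = {}
    remaining = available_kw
    for bid in sorted(bids, key=lambda b: b.price_per_kwh, reverse=True):
        granted = min(bid.requested_kw, remaining)
        if granted <= 0:
            break
        allocation[bid.ev_id] = granted
        remaining -= granted
    return allocation

if __name__ == "__main__":
    bids = [Bid("ev1", 0.30, 7.4), Bid("ev2", 0.22, 11.0), Bid("ev3", 0.35, 3.7)]
    print(allocate(bids, available_kw=15.0))  # ev3 and ev1 fully served, ev2 partially
```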
Abstract:
In today’s healthcare paradigm, optimal sedation during anesthesia plays an important role both in patient welfare and in the socio-economic context. For the closed-loop control of general anesthesia, two drugs have proven to have stable, rapid onset times: propofol and remifentanil. The effect of these drugs is quantified by the bispectral index (BIS), a measure derived from the EEG signal. In this paper, wavelet time–frequency analysis is used to extract useful information from the clinical signals, since they are time-varying and mark important changes in the patient’s response to drug dose. Model-based predictive control algorithms are employed to regulate the depth of sedation by manipulating these two drugs. The results of identification from real data and the simulation of the closed-loop control performance suggest that the proposed approach can bring an improvement of 9% in overall robustness and may be suitable for clinical practice.
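A minimal sketch of wavelet time–frequency feature extraction of the kind described, using PyWavelets; the synthetic BIS-like signal, the sampling rate, the Morlet wavelet and the energy feature are illustrative assumptions, not the paper's actual signal processing chain.

```python
# Hypothetical sketch: continuous wavelet transform of a BIS-like signal to expose
# time-varying frequency content. Signal, wavelet and scales are illustrative choices.
import numpy as np
import pywt

fs = 1.0                       # assume one BIS sample per second
t = np.arange(0, 600, 1 / fs)  # 10 minutes of synthetic data
signal = 50 + 10 * np.sin(2 * np.pi * 0.01 * t) + np.random.normal(0, 2, t.size)

scales = np.arange(1, 64)
coeffs, freqs = pywt.cwt(signal, scales, wavelet="morl", sampling_period=1 / fs)

# A simple time-varying feature: energy per time instant across all scales
energy = np.sum(np.abs(coeffs) ** 2, axis=0)
print(energy.shape)  # one energy value per sample, usable as a model input
```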
Abstract:
The application of compressive sensing (CS) to hyperspectral images has been an active area of research over the past few years, both in terms of hardware and of signal processing algorithms. However, CS algorithms can be computationally very expensive due to the extremely large volumes of data collected by imaging spectrometers, a fact that compromises their use in applications under real-time constraints. This paper proposes four efficient implementations of hyperspectral coded aperture (HYCA) for CS on commodity graphics processing units (GPUs): two of them, termed P-HYCA and P-HYCA-FAST, and two additional implementations for its constrained version (CHYCA), termed P-CHYCA and P-CHYCA-FAST. The HYCA algorithm exploits the high correlation existing among the spectral bands of hyperspectral data sets and the generally low number of endmembers needed to explain the data, which largely reduces the number of measurements necessary to correctly reconstruct the original data. The proposed P-HYCA and P-CHYCA implementations have been developed using the compute unified device architecture (CUDA) and the cuFFT library. Moreover, this library has been replaced by a fast iterative method in the P-HYCA-FAST and P-CHYCA-FAST implementations, which leads to very significant speedup factors and allows real-time requirements to be met. The proposed algorithms are evaluated not only in terms of reconstruction error for different compression ratios but also in terms of computational performance using two different GPU architectures by NVIDIA: 1) GeForce GTX 590; and 2) GeForce GTX TITAN. Experiments are conducted using both simulated and real data, revealing considerable acceleration factors and obtaining good results in the task of compressing remotely sensed hyperspectral data sets.
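For context, the measurement and reconstruction-error bookkeeping in a generic per-pixel compressive sensing setup can be sketched as below; this is a random-projection toy with a least-squares placeholder solver, not the HYCA coded-aperture operator or its GPU implementation.

```python
# Hypothetical sketch: per-pixel compressive measurements of a hyperspectral cube and
# the reconstruction error metric. Random Gaussian measurements stand in for the
# coded aperture; no claim is made about the actual HYCA operator.
import numpy as np

rows, cols, bands = 64, 64, 224          # toy cube size
ratio = 0.25                             # compression ratio
m = int(ratio * bands)                   # measurements per pixel

rng = np.random.default_rng(0)
X = rng.random((rows * cols, bands))     # toy reflectance data, one row per pixel
H = rng.standard_normal((m, bands))      # measurement matrix (stand-in for the aperture)

Y = X @ H.T                              # compressive measurements, shape (pixels, m)

# Least-squares "reconstruction" as a placeholder for the HYCA solver
X_hat = Y @ np.linalg.pinv(H).T

rmse = np.sqrt(np.mean((X - X_hat) ** 2))
print(f"reconstruction RMSE: {rmse:.4f}")
```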
Abstract:
This paper introduces a new hyperspectral unmixing method called Dependent Component Analysis (DECA). This method decomposes a hyperspectral image into a collection of reflectance (or radiance) spectra of the materials present in the scene (endmember signatures) and the corresponding abundance fractions at each pixel. DECA models the abundance fractions as mixtures of Dirichlet densities, thus enforcing the constraints on abundance fractions imposed by the acquisition process, namely non-negativity and constant sum. The mixing matrix is inferred by a generalized expectation-maximization (GEM) type algorithm. This method overcomes the limitations of unmixing methods based on Independent Component Analysis (ICA) and on geometry-based approaches. DECA performance is illustrated using simulated and real data.
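The abundance constraints DECA enforces (non-negativity and sum-to-one) can be illustrated with a small forward-model sketch; the Dirichlet sampling below only shows the constraint set and the linear mixing model, not DECA's GEM-based inference.

```python
# Hypothetical sketch: generate abundance fractions from a Dirichlet density (so they
# are non-negative and sum to one) and mix endmember signatures into pixels.
import numpy as np

rng = np.random.default_rng(1)
bands, endmembers, pixels = 200, 4, 1000

M = rng.random((bands, endmembers))                        # endmember signatures (columns)
A = rng.dirichlet(alpha=np.ones(endmembers), size=pixels)  # abundances, shape (pixels, endmembers)

assert np.all(A >= 0) and np.allclose(A.sum(axis=1), 1.0)  # the two constraints

noise = rng.normal(0, 0.01, (pixels, bands))
Y = A @ M.T + noise                                        # observed pixels, linear mixing model
print(Y.shape)                                             # (1000, 200)
```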
Abstract:
Electricity Markets are not only a new reality but an evolving one, as the involved players and rules change at a relatively high rate. Multi-agent simulation combined with Artificial Intelligence techniques may result in very helpful sophisticated tools. This paper presents a new methodology for the management of coalitions in electricity markets. This approach is tested using the multi-agent market simulator MASCEM (Multi-Agent Simulator of Competitive Electricity Markets), taking advantage of its ability to provide the means to model and simulate Virtual Power Players (VPP). VPPs are represented as coalitions of agents, with the capability of negotiating both in the market and internally with their members, in order to combine and manage their individual specific characteristics and goals with the strategy and objectives of the VPP itself. A case study using real data from the Iberian Electricity Market is performed to validate and illustrate the proposed approach.
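A hedged sketch of how a VPP-style coalition might aggregate its members' offers into a single market bid; the capacity-weighted price rule and the member names are illustrative assumptions, not MASCEM's coalition negotiation logic.

```python
# Hypothetical sketch: aggregate member offers into one coalition (VPP) bid.
# The capacity-weighted price is an illustrative choice only.

def aggregate_coalition_bid(members: list[dict]) -> dict:
    """Each member is {'name': str, 'capacity_mw': float, 'price_eur_mwh': float}."""
    total_capacity = sum(m["capacity_mw"] for m in members)
    weighted_price = sum(m["capacity_mw"] * m["price_eur_mwh"] for m in members) / total_capacity
    return {"capacity_mw": total_capacity, "price_eur_mwh": round(weighted_price, 2)}

if __name__ == "__main__":
    members = [
        {"name": "wind_farm", "capacity_mw": 30.0, "price_eur_mwh": 42.0},
        {"name": "small_hydro", "capacity_mw": 10.0, "price_eur_mwh": 55.0},
    ]
    print(aggregate_coalition_bid(members))  # 40 MW at 45.25 EUR/MWh
```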
Abstract:
This paper presents the applicability of a reinforcement learning algorithm based on the application of Bayes' theorem. The proposed reinforcement learning algorithm is an advantageous and indispensable tool for ALBidS (Adaptive Learning strategic Bidding System), a multi-agent system whose purpose is to provide decision support to electricity market negotiating players. ALBidS uses a set of different strategies for providing decision support to market players. These strategies are used according to their probability of success in each different context. The approach proposed in this paper uses a Bayesian network to decide, at each time, the action most likely to succeed, depending on past events. The performance of the proposed methodology is tested using electricity market simulations in MASCEM (Multi-Agent Simulator of Competitive Electricity Markets). MASCEM provides the means for simulating a real electricity market environment, based on real data from real electricity market operators.
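The idea of picking the strategy with the highest probability of success in a given context, updated from past events, can be illustrated with a simple Beta-Bernoulli model per (context, strategy) pair; this is a simplified stand-in for the Bayesian network described, and the strategy and context names are invented for the example.

```python
# Hypothetical sketch: Bayesian selection of the bidding strategy most likely to
# succeed in a given context, using Beta-Bernoulli posteriors as a simple stand-in
# for the full Bayesian network.
from collections import defaultdict

class StrategySelector:
    def __init__(self, strategies: list[str]):
        # (context, strategy) -> [successes, failures], with an implicit Beta(1, 1) prior
        self.counts = defaultdict(lambda: [0, 0])
        self.strategies = strategies

    def update(self, context: str, strategy: str, success: bool) -> None:
        self.counts[(context, strategy)][0 if success else 1] += 1

    def best(self, context: str) -> str:
        def posterior_mean(s: str) -> float:
            wins, losses = self.counts[(context, s)]
            return (wins + 1) / (wins + losses + 2)   # mean of Beta(wins+1, losses+1)
        return max(self.strategies, key=posterior_mean)

selector = StrategySelector(["regression", "game_theory", "neural_net"])
selector.update("peak_hours", "game_theory", True)
selector.update("peak_hours", "regression", False)
print(selector.best("peak_hours"))  # game_theory
```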
Abstract:
Demand response is an energy resource that has gained increasing importance in the context of competitive electricity markets and smart grids. New business models and methods designed to integrate demand response into electricity markets and smart grids have been published, reporting the need for additional work in this field. In order to adequately remunerate the participation of consumers in demand response programs, improved consumer performance evaluation methods are needed. The methodology proposed in the present paper determines the characterization of the baseline approach that best fits the consumer's historic consumption, in order to determine the expected consumption in the absence of participation in a demand response event and then determine the actual consumption reduction. The defined baseline can then be used to better determine the remuneration of the consumer. The paper includes a case study with real data to illustrate the application of the proposed methodology.
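A hedged sketch of the baseline-and-reduction arithmetic described: the "average of the same hour on recent non-event days" rule below is one common baseline characterization, used here only for illustration, whereas the paper compares several characterizations to find the best-fitting one.

```python
# Hypothetical sketch: estimate a demand-response baseline from historic consumption
# and compute the reduction achieved during an event. The averaging rule is one of
# several possible baseline characterizations and is used here only as an example.
import numpy as np

# consumption of the same hour on 5 recent, non-event days (kWh), plus the event hour
historic_same_hour = np.array([12.1, 11.8, 12.5, 12.0, 11.9])
actual_event_hour = 9.4

baseline = historic_same_hour.mean()          # expected consumption without the event
reduction = baseline - actual_event_hour      # consumption reduction to be remunerated

print(f"baseline: {baseline:.2f} kWh, reduction: {reduction:.2f} kWh")
```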
Abstract:
Electricity markets are complex environments, involving a large number of different entities with specific characteristics and objectives, making their decisions and interacting in a dynamic scene. Game theory has been widely used to support decisions in competitive environments; therefore, its application in electricity markets can prove to be a high-potential tool. This paper proposes a new scenario analysis algorithm, which includes the application of game theory, to evaluate and preview different scenarios and provide players with the ability to react strategically in order to exhibit the behavior that best fits their objectives. This model includes forecasts of competitor players' actions, used to build models of their behavior, in order to define the most probable expected scenarios. Once the scenarios are defined, game theory is applied to support the choice of the action to be performed. Our use of game theory is intended to support one specific agent and not to achieve equilibrium in the market. MASCEM (Multi-Agent System for Competitive Electricity Markets) is a multi-agent electricity market simulator that models market players and simulates their operation in the market. The scenario analysis algorithm has been tested within MASCEM, and our experimental findings with a case study based on real data from the Iberian Electricity Market are presented and discussed.
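A hedged sketch of choosing one agent's action against a set of forecast competitor scenarios: the scenario probabilities and payoffs are illustrative numbers, and maximizing expected payoff for the supported agent (rather than computing a market equilibrium) mirrors the stated goal only in spirit, not the paper's actual scenario analysis algorithm.

```python
# Hypothetical sketch: pick the action with the highest expected payoff for one agent
# across probability-weighted competitor scenarios. Numbers are illustrative only.
import numpy as np

actions = ["bid_low", "bid_mid", "bid_high"]
scenario_probs = np.array([0.5, 0.3, 0.2])     # forecast likelihood of each scenario

# payoff[i, j]: profit of action i if competitor scenario j materializes
payoff = np.array([
    [10.0,  8.0,  2.0],   # bid_low
    [12.0,  6.0,  4.0],   # bid_mid
    [15.0,  3.0, -2.0],   # bid_high
])

expected = payoff @ scenario_probs             # expected payoff of each action
best = actions[int(np.argmax(expected))]
print(dict(zip(actions, expected.round(2))), "->", best)
```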
Abstract:
Dissertation submitted to obtain the Master's Degree in Informatics Engineering
Abstract:
The dynamism and ongoing changes that the electricity markets sector is constantly undergoing, intensified by the huge increase in competitiveness, create the need for simulation platforms to support operators, regulators, and the involved players in understanding and dealing with this complex environment. This paper presents an enhanced electricity market simulator, based on multi-agent technology, which provides an advanced simulation framework for the study of real electricity markets operation and the interactions between the involved players. MASCEM (Multi-Agent Simulator of Competitive Electricity Markets) uses real data for the creation of realistic simulation scenarios, which allow the study of the impacts and implications that electricity market transformations bring to different countries. Also, the development of an upper ontology to support the communication between participating agents provides the means for the integration of this simulator with other frameworks, such as MAN-REM (Multi-Agent Negotiation and Risk Management in Electricity Markets). A case study using the enhanced simulation platform that results from the integration of several systems and different tools is presented, with a scenario based on real data, simulating the MIBEL electricity market environment and comparing the simulation performance with the real electricity market results.
Abstract:
Dissertation to Obtain the Degree of Master in Biomedical Engineering
Abstract:
In a globalized market, the continuous search for competitive advantages is a crucial factor for the success of organizations. Continuous process improvement is a common approach, since the results of these improvements translate directly into product quality. In this context, the Failure Mode Effect Analysis (FMEA) methodology is widely used, especially for its proactive characteristics, which allow the identification and prevention of process errors. The more effective the application of this tool, the greater the benefits for the organization. When used effectively, Process FMEA, besides being a powerful method for process analysis, enables continuous improvement and cost reduction [1]. This dissertation aimed to evaluate the effectiveness of the Process FMEA tool in an organization certified according to the ISO/TS16949 standard. The proposed methodology is based on the analysis of real data, that is, comparing the failures observed in the market with the failures that had been identified in the FMEA. By analysing the level of failures identified and not identified during the FMEA, together with the projection of those failures in the market, it is possible to determine how effective the FMEA was and to identify factors that constrain its better use. This study is organized in three phases: the first presents the proposed methodology, with the definition of a flowchart of the evaluation process and the metrics used; the second applies the proposed model to two case studies; and the last consists of a comparative, individual and global analysis that, besides comparing results, aims to identify weak points in the execution of the FMEA. The results of the case study indicate that the FMEA tool has been used effectively, since a significant number of potential failures were identified and prevented. However, there are still failures that were not identified in the FMEA and appeared at the customer, as well as some failures that were identified and still appeared at the customer. Failures translate into poor quality and costs for the business, so improvement actions are proposed. It can be concluded that good use of FMEA can be an important factor for the quality of customer service, with an impact on costs as well.
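For context, the standard Process FMEA risk prioritization that such an evaluation builds on can be sketched as below (Risk Priority Number = Severity x Occurrence x Detection); the failure modes and scores are illustrative, not taken from the dissertation's case studies.

```python
# Hypothetical sketch: compute and rank Risk Priority Numbers (RPN = S x O x D)
# for a few illustrative process failure modes.
failure_modes = [
    {"mode": "wrong component inserted", "severity": 8, "occurrence": 3, "detection": 4},
    {"mode": "solder joint void",        "severity": 6, "occurrence": 5, "detection": 6},
    {"mode": "label misprint",           "severity": 3, "occurrence": 4, "detection": 2},
]

for fm in failure_modes:
    fm["rpn"] = fm["severity"] * fm["occurrence"] * fm["detection"]

# Highest-RPN failure modes are prioritized for preventive actions
for fm in sorted(failure_modes, key=lambda f: f["rpn"], reverse=True):
    print(f'{fm["mode"]}: RPN = {fm["rpn"]}')
```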