906 results for renewal capability


Relevance:

10.00%

Publisher:

Abstract:

The monitoring of undesirable effects following vaccination is complex. Several confounding factors can lead to spurious, merely temporal associations that may alter risk perception and cause generalized distrust about the safe use of vaccines. Indeed, vaccines are complex drugs with unique characteristics, so their monitoring requires methodological approaches designed for that purpose. It is therefore understandable that, since the development of pharmacovigilance, there has been a drive to develop new methodologies that complement the Spontaneous Reporting Systems already in place. In this work we set out to develop and test a model for vaccine adverse reaction monitoring, based on self-reporting by users of events occurring after vaccination, and to test its capability to generate signals by applying disproportionality measures to data mining. For that purpose, an uncontrolled cohort of users vaccinated in Healthcare Centers was set up, with a follow-up period of fifteen days. Adverse vaccine events were registered by the users themselves in a paper diary. The data were analyzed using descriptive statistics and two quantitative methods of signal generation: the Proportional Reporting Ratio and the Information Component. The methodology generated a body of evidence sufficient for signal generation, and four signals were generated. Regarding the data mining, the use of the Information Component as a method for generating disproportionality signals seems to increase scientific efficiency by reducing the number of events needed for signal detection. The information reported by users seems valid as an indicator of signals of non-serious adverse vaccine reactions, allowing events to be registered without the bias of the reporter's assessment of the causal relation. The main reported events were injection-site reactions (62.7%) and fever (31.4%).
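The two disproportionality measures named above have simple closed forms. The sketch below illustrates them with hypothetical report counts (not data from the study): the PRR compares the reporting rate of an event for one vaccine against all other vaccines, and the Information Component is the base-2 log of the observed-to-expected count for a vaccine-event pair.

```python
# Disproportionality measures used in pharmacovigilance signal generation.
# All counts below are illustrative, not from the study.
import math

def prr(a, b, c, d):
    """Proportional Reporting Ratio from a 2x2 contingency table.
    a: target event with target vaccine, b: other events with target vaccine,
    c: target event with other vaccines, d: other events with other vaccines."""
    return (a / (a + b)) / (c / (c + d))

def information_component(a, n_vaccine, n_event, n_total):
    """IC = log2(observed / expected) for the vaccine-event pair."""
    expected = n_vaccine * n_event / n_total
    return math.log2(a / expected)

# Example: 40 fever reports out of 200 for vaccine X; 100 fever reports
# among 1800 reports for all other vaccines.
print(round(prr(40, 160, 100, 1700), 2))  # 3.6
```

A PRR above 2 with sufficient case counts is a common (though not universal) signal threshold; the IC variant used in practice usually adds a Bayesian shrinkage term not shown here.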

Abstract:

This paper presents a step count algorithm designed to work in real time using low computational power. This proposal is our first step in the development of an indoor navigation system based on Pedestrian Dead Reckoning (PDR). We present two approaches to solve this problem and compare them based on their step-counting error, as well as their suitability for use in a real-time system.
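The general idea behind low-cost real-time step counting can be sketched as threshold crossing on the accelerometer magnitude with a refractory period between detections. This is a minimal illustration of the technique class, not the paper's algorithm; the threshold and gap values are assumptions.

```python
# Minimal real-time step counter: detect upward threshold crossings of the
# accelerometer magnitude, enforcing a minimum gap between detected steps.

def count_steps(magnitudes, threshold=10.5, min_gap=3):
    """magnitudes: accelerometer magnitude samples (m/s^2);
    min_gap: minimum number of samples between consecutive steps."""
    steps, last_step = 0, -min_gap
    above = False
    for i, m in enumerate(magnitudes):
        if m > threshold and not above and i - last_step >= min_gap:
            steps += 1
            last_step = i
        above = m > threshold
    return steps

# Synthetic trace with three bursts above the threshold.
trace = [9.8, 9.9, 12.0, 9.7, 9.8, 11.5, 9.6, 9.8, 9.9, 12.3, 9.8]
print(count_steps(trace))  # 3
```

Because each sample needs only a comparison and a counter update, this kind of detector runs comfortably on low-power hardware, which is what the real-time requirement above demands.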

Abstract:

The capability to anticipate a contact with another device can greatly improve the performance and user satisfaction not only of mobile social network applications but of any other application relying on some form of data harvesting or hoarding. One of the most promising approaches to contact prediction is to extrapolate from past experience. This paper investigates the recurring contact patterns observed between groups of devices using an 8-year dataset of wireless access logs produced by more than 70000 devices. This effort made it possible to model the probability of occurrence of a contact at a predefined date between groups of devices using a power-law distribution that varies with neighbourhood size and recurrence period. In the general case, the model can be used by applications that need to disseminate large datasets to groups of devices. As an example, the paper presents and evaluates an algorithm that provides daily contact predictions based on the history of past pairwise contacts and their duration. Copyright © 2015 ICST.
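The extrapolation idea can be illustrated with a deliberately naive recurrence-based predictor: take the most common gap between successive past contacts of a pair and extrapolate one period ahead. This is only a sketch of the principle; the paper's actual model fits a power-law distribution over recurrence periods, which is not reproduced here.

```python
from collections import Counter

def predict_next_contact(past_days):
    """Naive recurrence predictor: the most common gap between successive
    past contact days is taken as the pair's period and extrapolated."""
    gaps = Counter(b - a for a, b in zip(past_days, past_days[1:]))
    period, _ = gaps.most_common(1)[0]
    return past_days[-1] + period

# A pair that meets weekly (e.g. a recurring lecture): predict the next day.
print(predict_next_contact([1, 8, 15, 22]))  # 29
```

A real predictor would also weigh contact duration and tolerate jitter around the dominant period, but the core step, mining the history for a recurrence structure, is the same.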

Abstract:

The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]. The nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17]. The nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model, and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, by the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. 
As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, feature extraction, and unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of observed data yielding statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward. In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among abundances. This dependence compromises ICA applicability to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades the ICA performance. 
IFA [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance. Considering the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. The MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at a lower computational complexity, some algorithms such as the vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and the N-FINDR [45] still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of purest pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, very often, the processing of hyperspectral data, including unmixing, is preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR). 
Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. The newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations. To overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performances. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model. This model takes into account the degradation mechanisms normally found in hyperspectral applications—namely, signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to data. The MOG parameters (number of components, means, covariances, and weights) are inferred using the minimum description length (MDL) based algorithm [55]. We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, where abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces positivity and constant-sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm. 
This approach is in the vein of references 39 and 56, replacing independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need to have pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief summary of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the ICA and IFA limitations in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
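The supervised case described above (linear mixing with known endmember signatures) can be sketched in a few lines: abundances are recovered by least squares and then projected onto the nonnegativity and sum-to-one constraints. The signatures below are synthetic and the projection is a simplification of the fully constrained least-squares problem.

```python
import numpy as np

# Linear mixing model: y = M a + n, with M the (bands x endmembers) signature
# matrix and a the abundance vector (nonnegative, summing to one).
rng = np.random.default_rng(0)
M = rng.random((50, 3))             # three synthetic endmember signatures, 50 bands
a_true = np.array([0.6, 0.3, 0.1])  # true fractional abundances
y = M @ a_true + 0.001 * rng.standard_normal(50)  # observed pixel with noise

# Unconstrained least squares, then a crude projection onto the constraints.
a_hat, *_ = np.linalg.lstsq(M, y, rcond=None)
a_hat = np.clip(a_hat, 0, None)
a_hat /= a_hat.sum()                # enforce the sum-to-one constraint

print(np.allclose(a_hat, a_true, atol=0.01))  # True
```

In the blind setting discussed in this chapter neither M nor the number of endmembers is known, which is precisely what makes ICA, IFA, and the Dirichlet-source model relevant.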

Abstract:

A correlation and predictive scheme for the viscosity and self-diffusivity of liquid dialkyl adipates is presented. The scheme is based on the kinetic theory for dense hard-sphere fluids, applied to the van der Waals model of a liquid to predict the transport properties. A "universal" curve for a dimensionless viscosity of dialkyl adipates was obtained using recently published experimental viscosity and density data of compressed liquid dimethyl (DMA), dipropyl (DPA), and dibutyl (DBA) adipates. The experimental data are described by the correlation scheme with a root-mean-square deviation of ±0.34 %. The parameters describing the temperature dependence of the characteristic volume, V0, and the roughness parameter, Rη, for each adipate are well correlated with one single molecular parameter. Recently published experimental self-diffusion coefficients of the same set of liquid dialkyl adipates at atmospheric pressure were correlated using the characteristic volumes obtained from the viscosity data. The roughness factors, RD, are well correlated with the same single molecular parameter found for viscosity. The root-mean-square deviation of the data from the correlation is less than 1.07 %. Tests are presented in order to assess the capability of the correlation scheme to estimate the viscosity of compressed liquid diethyl adipate (DEA) in a range of temperatures and pressures by comparison with literature data, and of its self-diffusivity at atmospheric pressure in a range of temperatures. It is noteworthy that no data for DEA were used to build the correlation scheme. The deviations encountered between predicted and experimental data for the viscosity and self-diffusivity do not exceed 2.0 % and 2.2 %, respectively, which are commensurate with the estimated experimental measurement uncertainty in both cases.

Abstract:

In recent years, the teaching and learning process has changed significantly thanks to the emergence of the Internet. New tools to support teaching have appeared, among which remote laboratories stand out. Nowadays, many educational institutions provide remote laboratories in their courses, allowing teachers and students to carry out real experiments over the Internet. These are implemented with different architectures and infrastructures, supported by several remotely accessible laboratory modules (e.g. measurement instruments). However, their adoption in education is still limited, due to: i) the lack of resources and technical skills in educational institutions to develop them, ii) the difficulty of sharing laboratory modules across different infrastructures, and iii) the limited capability to reconfigure infrastructures with those modules. To overcome these limitations, a prototype was conceived and developed within a PhD project [1], with an architecture based on the IEEE 1451.0 standard and on FPGA technology. Besides guaranteeing standardized development of, and access to, a remote laboratory, this prototype also promotes the sharing of laboratory modules across different infrastructures. That work exploited the reconfiguration capability of FPGAs to embed several modules in the laboratory infrastructure, all described in files, using hardware description languages structured according to the IEEE 1451.0 standard. Defining these modules requires the creation of binary data structures (Transducer Electronic Data Sheets, TEDSs), as well as other files that enable their interconnection with the laboratory infrastructure. However, creating these files is rather complex, since it requires several calculations and conversions. 
Given this complexity, this dissertation describes the development of a Web application for reading and writing TEDSs. In addition to a study of remote laboratories, the IEEE 1451.0 standard is described, with particular attention to its architecture and to the structure of the different TEDSs. To put the developed application in context, a brief presentation of a prototype of a reconfigurable remote laboratory, whose reconfiguration is supported by this application, is also provided. Finally, the verification of the Web application is described, in order to draw conclusions about its contribution to simplifying that reconfiguration.
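The binary TEDS blocks mentioned above are built from Type-Length-Value (TLV) fields, which is part of what makes writing them by hand error-prone. The sketch below parses a TLV payload with one-octet type and length fields; the field type codes used here are illustrative, not the normative IEEE 1451.0 codes, and real TEDS blocks carry additional length and checksum framing not shown.

```python
import struct

def parse_tlv(payload: bytes):
    """Parse a sequence of TLV fields: 1-byte type, 1-byte length, value."""
    fields, i = {}, 0
    while i < len(payload):
        ftype, flen = payload[i], payload[i + 1]
        fields[ftype] = payload[i + 2:i + 2 + flen]
        i += 2 + flen
    return fields

# Two hypothetical fields: type 3 carrying a 16-bit model number,
# type 10 carrying a 4-byte identifier stub.
payload = bytes([3, 2]) + struct.pack(">H", 451) + bytes([10, 4]) + b"\x01\x02\x03\x04"
fields = parse_tlv(payload)
print(struct.unpack(">H", fields[3])[0])  # 451
```

A Web application like the one described would wrap this kind of encoding and decoding behind forms, sparing the user the manual byte-level calculations and conversions.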

Abstract:

Dissertation presented to obtain a Master's degree in Biotechnology at the Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia

Abstract:

Dissertation presented at the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa to obtain the Master's degree in Informatics Engineering

Abstract:

In the new paradigm of power network operation, end consumers are seen as active players with the capability to manage their energy resources, namely loads, generation units, electric vehicles, and participation in Demand Response events. Energy consumption has clearly been rising, with the residential sector representing an important share of the overall consumption of developed countries. To make the active participation of consumers possible, several approaches have been proposed, with emphasis on Smart Grids and Microgrids. Various systems have been proposed and developed to make the operation of power systems more flexible. In this context, home energy management systems are a key element for the active participation of consumers in energy management, allowing system operators to coordinate not only generation but also demand. However, it is important to identify the advantages that the implementation and use of electricity management systems bring to end consumers. This dissertation proposes methodologies to support domestic consumers in managing their energy resources and implements them in the simulation platform of an energy management system developed for domestic consumers, the SCADA House Intelligent Management (SHIM). To this end, an interface was developed that allows laboratory simulation of the management system. Additionally, SHIM was integrated into the Multi-Agent Smart Grid Simulation Platform (MASGriP), allowing the simulation of scenarios considering different agents. Regarding the developed methodologies, different algorithms are proposed for managing the energy resources of a household, considering users with different types of resources (loads; loads and electric vehicles; loads, electric vehicles, and microgeneration). 
Additionally, a method for dynamic load management during long-duration Demand Response events is proposed, considering the technical characteristics of the equipment. Five case studies are presented, each with different simulation scenarios. These case studies are important to verify the feasibility of implementing the proposed methodologies in SHIM. The dissertation also presents real profiles of the various energy resources and of domestic consumers, which are subsequently used to develop the case studies and to apply the methodologies.
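The kind of load management described above can be illustrated with a priority-based curtailment rule: during a Demand Response event, loads are kept on in descending priority order until consumption fits under a power cap. The loads, priorities, and cap below are hypothetical; this is a sketch of the general technique, not the SHIM implementation.

```python
# Priority-based load curtailment during a demand response event.

def manage_loads(loads, power_cap):
    """loads: list of (name, power_kW, priority); higher priority stays on longer.
    Returns the names of the loads kept on and their total power."""
    ordered = sorted(loads, key=lambda l: l[2], reverse=True)
    kept, total = [], 0.0
    for name, power, _ in ordered:
        if total + power <= power_cap:   # keep the load if it fits under the cap
            kept.append(name)
            total += power
    return kept, total

loads = [("fridge", 0.2, 10), ("heater", 1.5, 4),
         ("EV charger", 3.0, 2), ("lights", 0.1, 8)]
kept, total = manage_loads(loads, power_cap=2.0)
print(kept, round(total, 2))  # ['fridge', 'lights', 'heater'] 1.8
```

A dynamic version, as proposed in the dissertation, would re-run such a decision periodically during a long event, taking into account equipment constraints such as minimum on/off times.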

Abstract:

The first objective of this report is to present the work carried out to obtain the knowledge justifying the award of the Master's degree in engineering, in the construction branch. The internship supporting this work took place at the company Metaloviana, where it was possible to follow the several stages of design, fabrication, and assembly of a false ceiling, namely in the control room of the Venda Nova III hydroelectric power plant. As Portugal has increasingly ambitious goals for the use of renewable energy, exploiting, among others, water resources for electricity production, EDP Produção carried out studies showing that power reinforcements of existing plants would be an economically very attractive option that would, at the same time, respond to growing energy demand. The Venda Nova III power reinforcement falls within this scope. Metaloviana, with manufacturing facilities in Viana do Castelo, has a long track record and a capability recognized both nationally and internationally, which gives confidence that the work followed was state of the art, duly accredited and grounded in widely recognized quality and merit.

Abstract:

Electricity markets are not only a new reality but an evolving one, as the involved players and rules change at a relatively high rate. Multi-agent simulation combined with Artificial Intelligence techniques may result in sophisticated and very helpful tools. This paper presents a new methodology for the management of coalitions in electricity markets. The approach is tested using the multi-agent market simulator MASCEM (Multi-Agent Simulator of Competitive Electricity Markets), taking advantage of its ability to model and simulate Virtual Power Players (VPPs). VPPs are represented as coalitions of agents capable of negotiating both in the market and internally with their members, in order to combine and manage the members' individual characteristics and goals with the strategy and objectives of the VPP itself. A case study using real data from the Iberian Electricity Market is performed to validate and illustrate the proposed approach.
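The simplest version of what a VPP coalition does at the market interface is aggregation: members' offers are combined into a single bid. The sketch below shows a capacity-weighted aggregation with hypothetical producers and prices; MASCEM's internal negotiation between coalition members is far richer than this.

```python
# Aggregating coalition members' offers into a single VPP market bid.

def vpp_bid(member_offers):
    """member_offers: list of (producer, capacity_MW, cost_per_MWh).
    Returns total capacity and the capacity-weighted average price."""
    total = sum(c for _, c, _ in member_offers)
    price = sum(c * p for _, c, p in member_offers) / total
    return total, price

offers = [("wind_farm", 30.0, 20.0), ("small_hydro", 10.0, 35.0)]
capacity, price = vpp_bid(offers)
print(capacity, price)  # 40.0 23.75
```

Aggregation like this is what lets small producers reach market scale through a VPP; the coalition management problem is then how to split the resulting revenue and obligations among members.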

Abstract:

Management Information Systems 2000, p. 103-111

Abstract:

The energy sector in industrialized countries has been restructured in recent years, with the purpose of decreasing electricity prices through increased competition and of facilitating the integration of distributed energy resources. However, the restructuring process increased the complexity of market players' interactions and gave rise to new problems and issues to be addressed. In order to provide players with competitive advantage in the market, decision support tools that facilitate the study and understanding of these markets become extremely useful. In this context arises MASCEM (Multi-Agent Simulator of Competitive Electricity Markets), a multi-agent based simulator that models real electricity markets. To reinforce MASCEM with the capability of recreating the reality of electricity markets to the fullest possible extent, it is crucial to enable it to simulate as many market models and player types as possible. This paper presents a new negotiation model implemented in MASCEM, based on the negotiation model used in the day-ahead market (Elspot) of Nord Pool. This is a key module for studying competitive electricity markets, as it has well-defined characteristics distinct from the already implemented markets, and it is a reference electricity market in Europe (the one with the largest amount of traded power).
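The core of an Elspot-style day-ahead market is uniform-price clearing: supply offers are stacked in ascending price, demand bids in descending price, and the marginal accepted offer sets a single clearing price for all traded energy. The sketch below illustrates this mechanism with hypothetical bids; it omits block orders, network constraints, and the other refinements of the real Nord Pool algorithm.

```python
# Uniform-price day-ahead market clearing (single hour, single area).

def clear_market(supply, demand):
    """supply/demand: lists of (quantity_MWh, price) tuples.
    Returns (traded energy, clearing price)."""
    supply = sorted(supply, key=lambda b: b[1])                 # cheapest first
    demand = sorted(demand, key=lambda b: b[1], reverse=True)   # highest first
    traded, price = 0.0, None
    si, di = 0, 0
    s_left = supply[0][0] if supply else 0
    d_left = demand[0][0] if demand else 0
    while si < len(supply) and di < len(demand) and supply[si][1] <= demand[di][1]:
        q = min(s_left, d_left)
        traded += q
        price = supply[si][1]        # marginal accepted offer sets the price
        s_left -= q
        d_left -= q
        if s_left == 0:
            si += 1
            s_left = supply[si][0] if si < len(supply) else 0
        if d_left == 0:
            di += 1
            d_left = demand[di][0] if di < len(demand) else 0
    return traded, price

supply = [(50, 10.0), (50, 30.0), (50, 60.0)]
demand = [(60, 70.0), (60, 25.0)]
print(clear_market(supply, demand))  # (60.0, 30.0)
```

In the example, the 25.0 demand bid is below the 30.0 marginal offer and is left unserved, so 60 MWh trade at the uniform price of 30.0.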

Abstract:

This paper presents a modified Particle Swarm Optimization (PSO) methodology to solve the problem of energy resources management with high penetration of distributed generation and Electric Vehicles (EVs) with gridable capability (V2G). The objective of the day-ahead scheduling problem in this work is to minimize operation costs, namely energy costs, in the management of these resources in the smart grid context. The modifications applied to the PSO aim to improve its suitability for this problem. The proposed Application Specific Modified Particle Swarm Optimization (ASMPSO) includes an intelligent mechanism to adjust velocity limits during the search process, as well as self-parameterization of the PSO parameters, making it more user-independent. It presents better robustness and convergence characteristics than the tested PSO variants, as well as better constraint handling. This enables its use for addressing real-world large-scale problems in much shorter times than deterministic methods, providing system operators with adequate decision support and achieving efficient resource scheduling even when a significant number of alternative scenarios must be considered. The paper includes two realistic case studies with different penetrations of gridable vehicles (1000 and 2000). The proposed methodology is about 2600 times faster than the Mixed-Integer Non-Linear Programming (MINLP) reference technique, reducing the time required from 25 h to 36 s for the scenario with 2000 vehicles, with about one percent of difference in the objective function cost value.
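The velocity-limit idea at the heart of the modification can be sketched with a toy PSO on a simple test function: velocities are clamped to a fraction of the search range, and here that fraction shrinks over iterations as a stand-in for the paper's intelligent adjustment mechanism, whose actual rules are not reproduced. All coefficients are common textbook values, not the ASMPSO self-parameterization.

```python
import random

def pso(cost, dim=2, n=20, iters=200, lo=-10.0, hi=10.0, seed=1):
    """Toy PSO with a velocity limit that shrinks during the search."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=cost)[:]
    for t in range(iters):
        vmax = (hi - lo) * (0.2 * (1 - t / iters) + 0.01)  # shrinking velocity limit
        for i in range(n):
            for d in range(dim):
                v = (0.7 * vel[i][d]
                     + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                     + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                vel[i][d] = max(-vmax, min(vmax, v))          # clamp velocity
                pos[i][d] = max(lo, min(hi, pos[i][d] + vel[i][d]))
            if cost(pos[i]) < cost(pbest[i]):
                pbest[i] = pos[i][:]
                if cost(pbest[i]) < cost(gbest):
                    gbest = pbest[i][:]
    return gbest

best = pso(lambda x: sum(v * v for v in x))  # minimize the sphere function
print(all(abs(v) < 0.1 for v in best))
```

Shrinking the velocity limit trades early exploration for late exploitation, which is one reason adaptive limits help convergence on constrained scheduling problems.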

Abstract:

The high penetration of distributed energy resources (DER) in distribution networks and the competitive environment of electricity markets impose the use of new approaches in several domains. The network cost allocation traditionally used in transmission networks should be adapted and used in the distribution networks, considering the specifications of the connected resources. The main goal is to develop a fairer methodology that distributes the distribution network use costs to all players using the network in each period. In this paper, a model considering different types of costs (fixed, losses, and congestion costs) is proposed, comprising the use of a large set of DER, namely distributed generation (DG), demand response (DR) of the direct load control type, energy storage systems (ESS), and electric vehicles with the capability of discharging energy to the network, which is known as vehicle-to-grid (V2G). The proposed model includes three distinct phases of operation. The first phase consists of an economic dispatch based on an AC optimal power flow (AC-OPF); in the second phase, Kirschen's and Bialek's tracing algorithms are used and compared to evaluate the impact of each resource on the network. Finally, the MW-mile method is used in the third phase of the proposed model. A distribution network of 33 buses with a large penetration of DER is used to illustrate the application of the proposed model.
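The third-phase allocation can be illustrated in its simplest form: under an MW-mile style rule, each player pays for every line in proportion to the flow it induces on that line. In the paper those per-player flows come from the tracing algorithms of the second phase; in the sketch below they are simply given, and all names and numbers are hypothetical.

```python
# Simplified MW-mile style cost allocation from per-player line flows.

def mw_mile(line_costs, player_flows):
    """line_costs: {line: annual cost}; player_flows: {player: {line: MW}}.
    Each line's cost is split in proportion to the players' flows on it."""
    totals = {line: sum(f[line] for f in player_flows.values())
              for line in line_costs}
    return {
        player: sum(line_costs[l] * flows[l] / totals[l] for l in flows)
        for player, flows in player_flows.items()
    }

line_costs = {"L1": 1000.0, "L2": 500.0}
flows = {"DG_A": {"L1": 8.0, "L2": 1.0},
         "load_B": {"L1": 2.0, "L2": 4.0}}
print(mw_mile(line_costs, flows))  # {'DG_A': 900.0, 'load_B': 600.0}
```

Because the split follows actual network use per period, a resource that relieves a line (e.g. local DG) ends up with a smaller share of that line's cost, which is the fairness property the proposed methodology aims for.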