966 results for Pure


Relevance: 10.00%

Abstract:

One of the main problems of hyperspectral data analysis is the presence of mixed pixels due to the low spatial resolution of such images. Linear spectral unmixing aims at inferring pure spectral signatures and their fractions at each pixel of the scene. The huge data volumes acquired by hyperspectral sensors put stringent requirements on processing and unmixing methods. This letter proposes an efficient implementation of the simplex identification via split augmented Lagrangian (SISAL) method, which exploits the graphics processing unit (GPU) architecture at a low level using the Compute Unified Device Architecture (CUDA). SISAL aims to identify the endmembers of a scene, i.e., it is able to unmix hyperspectral data sets in which the pure-pixel assumption is violated. The proposed implementation works in a pixel-by-pixel fashion, using coalesced memory accesses and exploiting shared memory to store temporary data. Furthermore, the kernels have been optimized to minimize thread divergence, thereby achieving high GPU occupancy. The experimental results obtained for simulated and real hyperspectral data sets reveal speedups of up to 49 times, which demonstrates that the GPU implementation can significantly accelerate the method's execution on large data sets while maintaining its accuracy.
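The letter describes its GPU kernels only at a high level. For orientation, a minimal NumPy sketch of the linear mixing model that SISAL-type unmixing operates on is given below; the dimensions, noise level, and variable names are hypothetical, and the unconstrained solve stands in for the constrained problem SISAL actually addresses.

```python
import numpy as np

# Linear mixing model: each pixel y = M a + n, where the columns of M are
# endmember signatures and a holds the abundance fractions.
rng = np.random.default_rng(0)
bands, endmembers, pixels = 50, 3, 1000          # hypothetical dimensions

M = rng.random((bands, endmembers))              # hypothetical endmember signatures
A = rng.dirichlet(np.ones(endmembers), pixels).T # abundances: >= 0, columns sum to 1
Y = M @ A + 0.01 * rng.standard_normal((bands, pixels))  # noisy observations

# With M known, unconstrained least-squares abundances come from a linear solve;
# SISAL instead estimates M itself by fitting a minimum-volume simplex to Y.
A_hat = np.linalg.lstsq(M, Y, rcond=None)[0]
print("mean abundance error:", np.abs(A_hat - A).mean())
```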

Relevance: 10.00%

Abstract:

The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]. The nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17]. The nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model, and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem that can be addressed, for example, by the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, and target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, feature extraction, and unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of the observed data that yields statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where the sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels, and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward.
In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among abundances. This dependence compromises ICA applicability to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades ICA performance. Independent factor analysis (IFA) [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA proceeds in two steps. First, source densities and the noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises IFA performance, as in the ICA case. Considering the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. The MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at a lower computational complexity, some algorithms, such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45], still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, the processing of hyperspectral data, including unmixing, is very often preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. The newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations. To overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performance. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model. This model takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using a minimum description length (MDL)-based algorithm [55].
We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, where abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm. This approach is in the vein of references 39 and 56, replacing the independent sources represented by MOG with mixtures of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need for pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 gives a brief overview of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the limitations of ICA and IFA in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
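The chapter's central obstacle, that the sum-to-one constraint makes abundance fractions mutually dependent and so violates the independence assumption behind ICA and IFA, is easy to check numerically. A small sketch with arbitrarily chosen Dirichlet parameters (not the chapter's data):

```python
import numpy as np

# Abundance fractions are nonnegative and sum to one, which is exactly the
# support of a Dirichlet distribution; sample such sources and measure dependence.
rng = np.random.default_rng(1)
fractions = rng.dirichlet(alpha=[1.0, 1.0, 1.0], size=100_000)

# The constraint couples the components: their correlations are nonzero,
# so they cannot be mutually independent, which is what defeats plain ICA here.
corr = np.corrcoef(fractions.T)
print(np.round(corr, 3))  # off-diagonal entries near -0.5 for this symmetric case
```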

Relevance: 10.00%

Abstract:

Hyperspectral remote sensing exploits the electromagnetic scattering patterns of different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises ICA applicability to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors with an unmixing matrix that minimizes the mutual information among sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when the sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33].
Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref. [37] is also of the MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. The MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a logarithmic law [39] to assure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms, such as the pixel purity index (PPI) [35] and N-FINDR [40], still find the minimum volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to the extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than the volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data. ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists of flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis of a lower-dimensional subspace using a modified Gram-Schmidt orthogonalization. The selected vectors are then projected onto this subspace and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46].
In this chapter we develop a new algorithm, vertex component analysis (VCA), to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices, the latter based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data in the least-squares sense [48, 49]. We note, however, that VCA works with both projected and unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR; yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Sections 19.3 and 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
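As a rough illustration of the projection step described above (not the authors' implementation: the signal-subspace identification, affine projection, and noise handling are all omitted), a VCA-like extraction loop might look as follows; every name and parameter here is a placeholder.

```python
import numpy as np

def vca_like(Y, p, seed=0):
    """Pick p endmember candidates by iterative orthogonal projection.

    Y is a (bands, pixels) data matrix and p the number of endmembers.
    Simplified sketch of the idea only; real VCA adds dimensionality
    reduction and an affine projection step.
    """
    rng = np.random.default_rng(seed)
    E = np.zeros((Y.shape[0], p))            # selected endmember signatures
    for k in range(p):
        # Random direction, made orthogonal to the span of endmembers found so far.
        f = rng.standard_normal(Y.shape[0])
        if k > 0:
            Q, _ = np.linalg.qr(E[:, :k])    # orthonormal basis of current span
            f -= Q @ (Q.T @ f)               # project onto the orthogonal complement
        # The extreme of the data projected onto f is the next endmember candidate.
        idx = np.argmax(np.abs(Y.T @ f))
        E[:, k] = Y[:, idx]
    return E

# Usage on a (bands, pixels) matrix Y: E = vca_like(Y, p=3)
```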

Relevance: 10.00%

Abstract:

Linear unmixing decomposes a hyperspectral image into a collection of reflectance spectra, called endmember signatures, and a set of corresponding abundance fractions indicating the respective spatial coverage. This paper introduces vertex component analysis (VCA), an unsupervised algorithm to unmix linear mixtures of hyperspectral data. VCA exploits the fact that endmembers occupy the vertices of a simplex, and assumes the presence of pure pixels in the data. VCA performance is illustrated using simulated and real data. VCA competes with state-of-the-art methods at a much lower computational complexity.
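For reference, the simplex geometry that VCA exploits is that of the standard linear mixing model (this is textbook notation, not formulas taken from the paper):

```latex
% Each observed pixel is a convex combination of endmember signatures plus noise:
\[
  \mathbf{y} = \mathbf{M}\,\boldsymbol{\alpha} + \mathbf{n},
  \qquad \alpha_i \ge 0, \qquad \sum_{i=1}^{p} \alpha_i = 1,
\]
% so, noise aside, the data lie in the simplex whose p vertices are the columns
% of M; pure pixels, when present, sit at those vertices, which is what the
% iterative orthogonal projections of VCA seek out.
```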

Relevance: 10.00%

Abstract:

Dissertation for the degree of Master in Civil Engineering, specialization in Building Construction (Edificações)

Relevance: 10.00%

Abstract:

This research seeks to describe and understand how strategy influences leadership and how leadership, in turn, interacts with processes of innovation and change in health organizations. No previous studies on this research problem, or on its theoretical framing, are known in Portugal. This is an exploratory and descriptive study involving five health organizations, four Portuguese and one Spanish, including four hospitals (two of them private) and a local health unit. A mixed research approach (qualitative and quantitative) was used, which made it possible to understand, through case studies, how strategy, leadership, and innovation are articulated in these five organizations. The results of the empirical study came from direct and structured observation, interviews with key actors, paper and digital documents, and a self-administered questionnaire answered by a sample (n=165) of line and staff actors (Administrators, Service/Department Directors, Head Nurses, and Technical Coordinators) of the five health organizations. Despite the complexity and specificity of health organizations, both the Miles & Snow model (organizational strategy) and Quinn's Competing Values Framework (organizational culture and leadership), suitably adapted, proved heuristic and applicable to them. Public-sector, private-sector, and concessioned public organizations (public-private partnerships) can all be tracked and monitored in their processes of innovation and change, in association with the types of organizational culture, leadership, and strategy adopted. Health organizations coexist in a continuum in which the environment (both internal and external) and time are decisive factors conditioning the strategy to adopt. Given the dynamic and complex reality in which each organization moves, there are no pure types; rather, there is great organizational plasticity and flexibility. Leaders usually exercise formal authority through normative circulars. They are not peers (nor primi inter pares) and sometimes place themselves in a position of superiority, when partnership, cooperation, and consensus-building with all collaborators would be more appropriate, so that the collaborators themselves become the true protagonists and facilitators of change and innovation. Among the factors facilitating innovation and change in the health organizations studied were ease of learning, an adequate vision/mission, and the absence of fear of failure; among the inhibiting factors were lack of articulation between services/departments, organizational structure (highly vertical in the public sector, more horizontal in the private sector), resistance to change, lack of time, and failures in reaction time (the window for decision-making is sometimes missed).

Relevance: 10.00%

Abstract:

This dissertation was carried out in collaboration with the Monteiro, Ribas business group, with the main objective of auditing the management of the industrial waste produced by its factories located on the Estrada da Circunvalação, in Porto. To meet this objective, the legal obligations concerning waste were first surveyed and recommended practices for internal waste management were identified. For each factory, the wastes produced were identified and their paths analyzed, considering their origins, the places and means of containment at the origin, the internal transport methods, the places and means of preliminary storage, and also the quantities produced, the carriers, the final operators, and the final management operations, the last four items referring to the year 2013. The audit was then carried out in the different units, checking compliance with legal requirements and good waste-management practices. The main nonconformities detected, common to the various plants, were the absence of a defined place/container for some wastes, missing or insufficient identification of containers/containment areas, the absence of retention basins for hazardous liquid wastes, the fact that only hazardous wastes were covered during internal transport, and the fact that hazardous liquid wastes were transported neither on mobile retention basins nor with the material needed to absorb spills. Corrective and/or improvement measures were proposed for each waste and each industrial unit, where applicable. Regarding preliminary storage, the main nonconformity detected was that all four storage yards held hazardous wastes at the time of the audits, which is not appropriate. Corrective and/or improvement measures were proposed for each yard. As a global proposal, taking economic and safety factors into account, it was suggested that only the hazardous-waste yard be allowed to store this type of waste, so internal transport procedures must be improved so that these wastes are taken directly to the hazardous-waste yard. Accordingly, two of the yards should undergo some remodeling, namely being covered and closed, even if not completely, and the hazardous-waste yard should be closed, keeping openings for ventilation, and equipped with spill-containment kits, safety data sheets, and emergency procedures; furthermore, because the spill-containment system is small relative to the total storage, the use of retention basins is advised for some of the containers of hazardous liquid wastes. Throughout this process, and as a consequence of the audit, some nonconforming situations were corrected. Appropriate work instructions were also prepared and will be made available later. A process-evaluation methodology was also developed as a working basis for reducing the waste generated. The step chosen for its application was an auxiliary step of the production process of Monteiro, Ribas - Revestimentos, S.A., the cleaning of vats with solvents, in order to minimize the solvent waste produced in this operation.
Since the factory already performs this operation with prevention and reuse measures in place, recycling is currently the only way to further minimize solvent waste. Two options were therefore studied: acquiring solvent-regeneration equipment, or contracting an operator to regenerate the solvent waste and return the regenerated solvent. The first option could allow a reduction of about 95% in solvent-waste production and in the purchase of pure solvent, with an estimated annual saving of about **** € and a payback period of about 16 months; the second could lead to a significant reduction, of about 65%, in the purchase of pure solvent and to an annual saving of about **** €.

Relevance: 10.00%

Abstract:

Dissertation presented at the Faculty of Sciences and Technology of the New University of Lisbon in fulfillment of the requirements for the Master's degree in Conservation Science

Relevance: 10.00%

Abstract:

A thesis submitted to the University of Innsbruck for the doctoral degree in Natural Sciences (Physics) and to the New University of Lisbon for the doctoral degree in Physics (Atomic and Molecular Physics)

Relevance: 10.00%

Abstract:

Dissertation presented for the degree of Doctor in Conservation and Restoration, specialty Conservation Sciences, by the Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia

Relevance: 10.00%

Abstract:

Thesis presented in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the subject of Electrical and Computer Engineering by the Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia

Relevance: 10.00%

Abstract:

Following the deregulation of retail electricity markets in most countries, the majority of new entrants to the liberalized retail market were pure REPs (retail electricity providers). These entities were exposed to financial risks from unexpected price variations, price spikes, volatile loads, and the potential exertion of market power by GENCOs (generation companies). A REP can manage market risks by employing DR (demand response) programs and by using its generation and storage assets at the distribution network to serve its customers. The proposed model shows how a REP with light physical assets, such as DG (distributed generation) units and ESSs (energy storage systems), can survive in a competitive retail market. The paper discusses effective risk management strategies for REPs to deal with the uncertainties of the DAM (day-ahead market) and to hedge financial losses in the market. A two-stage stochastic programming problem is formulated, which aims to establish financial incentive-based DR programs and the optimal dispatch of the DG units and ESSs. The uncertainty of the forecasted day-ahead load demand and electricity price is taken into account with a scenario-based approach. The principal advantage of this model for REPs is reducing the risk of financial losses in DAMs, and the main benefit for the whole system is market power mitigation by virtually increasing the price elasticity of demand and reducing the peak demand.
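The abstract outlines a scenario-based two-stage stochastic program. A toy single-period version, in which every price, capacity, and scenario value is invented purely for illustration, can be posed as a linear program:

```python
import numpy as np
from scipy.optimize import linprog

# Toy two-stage problem for a REP (all numbers hypothetical).
# Stage 1: buy q MWh in the day-ahead market at a known price.
# Stage 2 (per scenario s): dispatch DG (g_s) and call DR (r_s) to cover load_s.
price_da, c_dg, c_dr = 50.0, 70.0, 90.0       # EUR/MWh
dg_cap, dr_cap = 30.0, 15.0                   # MWh
loads = np.array([80.0, 100.0, 120.0])        # scenario loads
probs = np.array([0.3, 0.4, 0.3])             # scenario probabilities
S = len(loads)

# Variables x = [q, g_1..g_S, r_1..r_S]; minimize expected total cost.
c = np.concatenate(([price_da], probs * c_dg, probs * c_dr))

# Coverage constraints: q + g_s + r_s >= load_s  =>  -q - g_s - r_s <= -load_s
A_ub = np.zeros((S, 1 + 2 * S))
A_ub[:, 0] = -1.0
for s in range(S):
    A_ub[s, 1 + s] = -1.0          # -g_s
    A_ub[s, 1 + S + s] = -1.0      # -r_s
b_ub = -loads

bounds = [(0, None)] + [(0, dg_cap)] * S + [(0, dr_cap)] * S
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print("DAM purchase:", round(res.x[0], 1), "expected cost:", round(res.fun, 1))
```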

Relevance: 10.00%

Abstract:

Brazil's nosologic profile has been undergoing profound modifications. Some occurred because of massive immunization campaigns and socioeconomic and demographic trends; others were pure nosologic transitions, such as the emergence of AIDS. This demand study describes how these changes were reflected in the 8,630 admissions to an Infectious Diseases Department in Niterói over a thirty-year period. Brazilian rural endemic diseases were infrequent (3.45%). Men predominated (62%) throughout the period, in all age strata and in nearly all diseases. Children under fifteen predominated until 1983. In the case of tetanus, there was a striking rise in the age of patients. Institutional mortality dropped from 31% in 1965 to 10% in 1984, but rose thereafter to 15% in 1994; had AIDS patients not been counted, mortality would have kept falling, to 8% at the end of the study period. The growing unimportance of immunopreventable diseases paralleled the growing prominence of AIDS: in less than a decade, AIDS ranked fifth among the most frequent diseases of the whole thirty-year period. In contrast to the immunopreventable diseases, neither the meningitides nor pneumonia appear to be in decline. AIDS, by its exponential incidence, its chronic character, and the countless opportunistic infections it causes, imposes itself as a challenge for the coming years.

Relevance: 10.00%

Abstract:

Roads have been growing in importance, and pavements with better characteristics are needed; bituminous mastics were developed for this purpose. Mastics are mixtures of bitumen with filler, a material with improved properties relative to pure bitumen. Bitumen is a viscoelastic material, which makes viscosity a property of vital importance to study. Adding fillers to bitumen increases its viscosity, making this rheological property even more important, mainly because the workability of the bituminous mixture is compromised when the viscosity is not correct. For this dissertation, tests were performed on bituminous mastics using a Brookfield rotational viscometer. The mastics tested were composed of fillers of different origins and with different incorporation rates, and the differences obtained in the dynamic viscosity values were analyzed. The results show that, at the manufacturing temperature, the mastics have a viscous behavior identical to that of pure bitumen. Although the mastics produced have a higher viscosity than pure bitumen, the value of this rheological property tends to approach that of the bitumen at the highest temperatures. For the mastics produced with smaller amounts of filler, the viscosity values obtained are generally similar. For higher incorporation rates, the dynamic viscosity values of the mastics are considerably higher and more distinct. This leads to the conclusion that the higher the percentage of filler in the mastic, the higher the viscosity and the greater the dispersion of the results, at least for the fillers and incorporation rates tested.
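The dissertation reports Brookfield measurements rather than a model, but the temperature dependence it describes is often summarized with an Arrhenius-type fit of viscosity against temperature; the sketch below uses synthetic values, not the dissertation's data.

```python
import numpy as np

# Arrhenius-type model for binder viscosity: eta(T) = A * exp(Ea / (R * T)).
# Fitting log(eta) against 1/T gives a straight line; the synthetic points below
# stand in for Brookfield readings (Pa.s) at typical mixing temperatures.
R = 8.314                                     # J/(mol*K)
T = np.array([135.0, 150.0, 165.0]) + 273.15  # test temperatures in K
eta = np.array([0.60, 0.30, 0.17])            # hypothetical viscosities, Pa.s

slope, intercept = np.polyfit(1.0 / T, np.log(eta), 1)
print("apparent activation energy ~", round(slope * R / 1000, 1), "kJ/mol")
```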

Relevance: 10.00%

Abstract:

Nowadays, hospital information systems must allow differentiated use by the various actors involved, in a scenario of constant adaptation and evolution. For this, interoperability between the hospital's information systems and the various service providers, as well as hospital devices, is essential. Although the need to support heterogeneity between systems is fundamental, access to and exchange of information must be done in a protocoled, secure, and transparent way. The modern medical information infrastructure consists of many heterogeneous systems, with diverse mechanisms to control the underlying data. Information about a single patient can be scattered across several systems (e.g., patient transfers, readmissions, multiple treatments, etc.). The need to access patient data in a consolidated way from different locations becomes evident, so it is fundamental to use an architecture that promotes interoperability between systems. To achieve this interoperability, middleware layers can be implemented to adapt the information exchanges between systems. However, this does not solve the underlying problem, namely the need for a standard that guarantees reliable client/provider interaction. To this end, a solution based on an ESB dedicated to the healthcare domain, called the HSB (Healthcare Service Bus), is proposed. Among the most common standards in this area, HL7 and DICOM stand out, the latter aimed more specifically at hospital imaging devices and the former used for the management and exchange of medical information between systems. The case study on which this dissertation is based is that of a medium-sized hospital whose information system started as a monolithic, single-vendor solution. Over the years, the single vendor split into several independent and competing vendors, leading to an extremely worrying scenario in terms of maintenance and future evolution of the existing information system. As a result of the work carried out, an architecture was proposed that allows the current system to evolve progressively toward a pure HSB.
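To make the HSB routing idea concrete, a minimal sketch of HL7 v2 message routing is shown below; the message content, handler names, and routing table are hypothetical, and a real bus would use a proper HL7 library and transport layer.

```python
# Minimal sketch of HSB-style routing for HL7 v2 messages (illustrative only).
# HL7 v2 is pipe-delimited; the MSH segment's ninth field carries the message
# type, which sits at split index 8 because the field separator itself is MSH-1.
raw = "MSH|^~\\&|HIS|HOSPITAL|LAB|HOSPITAL|202401011200||ADT^A01|MSG0001|P|2.5"

def message_type(msg: str) -> str:
    msh = msg.split("\r")[0].split("|")   # segments are separated by carriage returns
    return msh[8].split("^")[0]           # e.g. "ADT" from "ADT^A01"

def handle_adt(msg: str) -> None:
    print("routing admission/discharge/transfer event to the EHR adapter")

def handle_oru(msg: str) -> None:
    print("routing observation result to the lab-results adapter")

ROUTES = {"ADT": handle_adt, "ORU": handle_oru}  # the bus's routing table

ROUTES.get(message_type(raw), lambda m: print("no route"))(raw)
```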