968 results for resource dependence theory


Relevance: 20.00%

Abstract:

Since 1989, five parliamentary elections have been the stage for the foundation and demise of political parties aspiring to govern the new democratic Polish state. The demise of the AWS before the 2001 elections, after ten years of attempts to create a centre-right core party, resulted in a new splintering of the right wing, and the centre-right again became devoid of a pivotal formation. While Eurosceptic parties on average gain 8 percent of the vote, in the 2001 Polish parliamentary elections Eurosceptic parties gained around 20 percent of the vote. In Poland, right-wing parties show an unusual propensity for Euroscepticism. The persistence and increased importance of nationalism in Poland, which has prevented the development of a strong Christian democratic party, effectively explains the levels of Euroscepticism on the right. After the autumn 2005 parliamentary elections, the national conservative party, Law and Justice, formed a governing coalition with the national Catholic League of Polish Families, creating one of the first Eurosceptic governments. Although this work does not intend to provide a theorisation of party system development, it shows that the context of European integration fostered nationalists' divisiveness and provoked the splitting of the right. The unusual propensity of parties for Euroscepticism makes Poland a paradigmatic case of the kind of conflicts over European integration emerging in Central and Eastern European party systems.

Relevance: 20.00%

Abstract:

Organizations are entities of a systemic nature, mostly composed of several people who, interacting with one another, set out to achieve common goals. They frequently have to respond to changes in their external environment through processes of organizational change, being fundamentally adaptive: to survive, they need to readjust continually to the changing conditions of their surroundings. The success of organizations depends on their capacity to interact with their environment, that is, on their capacity to innovate and to operate locally or globally, creating new business opportunities that must be seized. Information technologies and systems, and the way they are used, are determining factors in these processes of evolution and change. The IT strategy must be aligned with the business objectives, and its use must contribute to gains in productivity and efficiency. This work describes the analysis, design, selection, and implementation of an information system at Portgás, S.A., based on an ERP (Enterprise Resource Planning) system capable of supporting organizational change and improving the overall performance of the organization, promoting in a first phase an exponential growth of the business and, subsequently, the adaptation of the organization to a competitive market. The case describes the work carried out by the candidate and by internal and external teams: gathering general, technical, and functional requirements; developing a tender specification; and selecting, implementing, and operating an SAP ERP. The presentation and discussion of the case are framed by a literature review on the role of IT in organizational change processes, strategic alignment and competitive advantage of IT, the contribution of IT to productivity gains, IT adoption and diffusion processes, critical success factors, and BPM (Business Process Management).

Relevance: 20.00%

Abstract:

Energy resource scheduling is becoming increasingly important, as the use of distributed resources is intensified and massive use of gridable vehicles (V2G) is envisaged. This paper presents a methodology for day-ahead energy resource scheduling for smart grids considering the intensive use of distributed generation and V2G. The main focus is the comparison of different EV management approaches in the day-ahead energy resources management, namely uncontrolled charging, smart charging, V2G, and Demand Response (DR) programs in the V2G approach. Three different DR programs are designed and tested (trip reduce, shifting reduce, and reduce+shifting). Another important contribution of the paper is the comparison between deterministic and computational intelligence techniques to reduce the execution time. The proposed scheduling is solved with a modified particle swarm optimization. Mixed-integer non-linear programming is also used for comparison purposes. A full AC power flow calculation is included to take the network constraints into account. A case study with a 33-bus distribution network and 2000 V2G resources is used to illustrate the performance of the proposed method.
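The abstract does not give the formulation details; as a rough illustration of the computational-intelligence side, the sketch below is a minimal particle swarm optimizer for a generic day-ahead, box-constrained cost-minimization problem. All names, the toy tariff, and the penalty-based cost function are hypothetical stand-ins, not the paper's model (which also embeds an AC power flow).

```python
import numpy as np

def pso_schedule(cost, lb, ub, n_particles=30, n_iter=200,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize `cost` over box-constrained schedules with a basic PSO.

    cost : callable mapping a schedule vector to a scalar cost
    lb, ub : per-variable lower/upper bounds (e.g., charging power limits)
    """
    rng = np.random.default_rng(seed)
    dim = len(lb)
    x = rng.uniform(lb, ub, size=(n_particles, dim))   # particle positions
    v = np.zeros_like(x)                               # particle velocities
    pbest = x.copy()
    pbest_cost = np.array([cost(p) for p in x])
    gbest = pbest[pbest_cost.argmin()].copy()

    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        # Inertia + cognitive pull to personal best + social pull to global best.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lb, ub)                     # enforce resource limits
        c = np.array([cost(p) for p in x])
        improved = c < pbest_cost
        pbest[improved], pbest_cost[improved] = x[improved], c[improved]
        gbest = pbest[pbest_cost.argmin()].copy()
    return gbest, pbest_cost.min()

# Toy usage: a 24-hour charging schedule for one aggregated EV fleet
# (hypothetical tariff and demand, for illustration only).
hours = 24
price = np.abs(np.sin(np.arange(hours) / 3)) + 0.5        # stand-in tariff
demand = 50.0                                             # energy to deliver (kWh)
cost = lambda p: price @ p + 1e3 * abs(p.sum() - demand)  # penalty for unmet demand
best, best_cost = pso_schedule(cost, lb=np.zeros(hours), ub=np.full(hours, 10.0))
```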

Relevance: 20.00%

Abstract:

The massification of electric vehicles (EVs) can have a significant impact on the power system, requiring a new approach to energy resource management. Energy resource management aims to obtain the optimal scheduling of the available resources, considering distributed generators, storage units, demand response, and EVs. The large number of resources makes the problem more complex: reaching the optimal solution can take several hours, while a solution for the next day is needed quickly. It is therefore necessary to use adequate optimization techniques to determine the best solution in a reasonable amount of time. This paper presents a hybrid artificial intelligence technique to solve a complex energy resource management problem with a large number of resources, including EVs, connected to the electric network. The hybrid approach combines simulated annealing (SA) and ant colony optimization (ACO) techniques. The case study concerns different EV penetration levels. Comparisons with a previous SA approach and a deterministic technique are also presented. For the 2000-EV scenario, the proposed hybrid approach found a better solution than the previous SA version, resulting in a cost reduction of 1.94%. For this scenario, the proposed approach is approximately 94 times faster than the deterministic approach.
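The abstract does not describe how the SA and ACO components are coupled; the sketch below shows one plausible minimal coupling, purely for illustration and not the authors' algorithm: ants sample candidate schedules from a pheromone matrix, a Metropolis test with a cooling temperature decides whether each candidate replaces the incumbent, and pheromone is reinforced along the incumbent. All identifiers are hypothetical, and nonnegative costs are assumed.

```python
import math
import random

def hybrid_sa_aco(cost, n_vars, n_options, n_ants=10, n_iter=100,
                  t0=1.0, alpha=0.95, rho=0.1, seed=0):
    """Hybrid metaheuristic sketch: ACO-style construction + SA-style acceptance.

    cost : maps a tuple of discrete choices (e.g., one charging level per EV
           and period) to a scalar; assumed nonnegative.
    """
    rng = random.Random(seed)
    tau = [[1.0] * n_options for _ in range(n_vars)]   # pheromone trails
    incumbent = tuple(rng.randrange(n_options) for _ in range(n_vars))
    inc_cost = cost(incumbent)
    best, best_cost = incumbent, inc_cost
    t = t0
    for _ in range(n_iter):
        for _ in range(n_ants):
            # Construct a solution: sample each variable ~ pheromone weights.
            sol = tuple(rng.choices(range(n_options), weights=tau[i])[0]
                        for i in range(n_vars))
            c = cost(sol)
            # SA acceptance: always take improvements, sometimes take worse ones.
            if c < inc_cost or rng.random() < math.exp((inc_cost - c) / t):
                incumbent, inc_cost = sol, c
            if c < best_cost:
                best, best_cost = sol, c
        # Evaporate pheromone and reinforce along the incumbent solution.
        for i in range(n_vars):
            tau[i] = [(1 - rho) * v for v in tau[i]]
            tau[i][incumbent[i]] += 1.0 / (1.0 + inc_cost)
        t *= alpha                                      # geometric cooling
    return best, best_cost
```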

Relevance: 20.00%

Abstract:

SUMMARY - We are witnessing exponential growth in health expenditures, both in Europe and in the United States. In Portugal, total health spending reached 10.2% of GDP in 2006, against the 8.8% recorded at the beginning of the previous decade. It is important to understand what drives this growth in overall terms, in terms of resource consumption, and also in terms of public spending. This project has two fundamental objectives: first, to contribute to the study of the determinants of the demand for health care in Portugal and, consequently, to estimate the price elasticities of demand for different types of health care. Methodology: Observational study based on the empirical analysis of administrative (claims) data on the use of health care by 12,230 individuals holding an individual health insurance plan with a private insurer in Portugal. The price elasticities of demand for the different types of health care were obtained using the percentage changes in the quantities of the different types of care, before and after the change in the price paid by the individual for each type of care. Results: In line with traditional economic theory, the increase in the price to be paid reduces the consumption of health care, and demand is elastic; that is, the estimated price elasticities are greater than 1 in absolute value, so the price increase led to a more than proportional reduction in the quantities demanded. The demand for outpatient care is more sensitive to price changes than the demand for inpatient care. ------- ABSTRACT - We are witnessing an exponential growth of health care expenditures around the world. In Portugal, the total expenditure on health amounted to 10.2% of GDP in 2006, against 8.8% at the beginning of the previous decade. It is important to understand what motivates this growth both in overall terms, with respect to resource consumption, and even in terms of public spending. This study was designed to achieve two objectives: first, to contribute to the study of demand for health care and, more specifically, to analyze the effect of price changes on the utilization of health care services; and secondly, to estimate the demand elasticity for different types of health care. Methodology: Observational study based on empirical analysis of administrative data (claims) from a private health insurance company in Portugal. The sample used had information regarding 12,230 individuals. Demand elasticity for the different types of health care services was obtained as the quotient between the percentage change in the quantity of health care services, before and after the price change, and the corresponding percentage change in the price. Results: This study showed that, for all medical services, price increases were associated with reductions in the quantity of care consumed, as predicted by neoclassical demand theory, and we are in the presence of an elastic demand. This means that price elasticity is greater than 1 in absolute value, so the increase in the price led to a more than proportional reduction in the quantity demanded. Demand elasticity was more responsive to changes in the price of specialist and emergency care than to changes in the price of inpatient care.
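For reference, the arc price elasticity implied by this percentage-change procedure is the standard one:

```latex
\varepsilon_p \;=\; \frac{\Delta q / q_1}{\Delta p / p_1}
             \;=\; \frac{(q_2 - q_1)/q_1}{(p_2 - p_1)/p_1},
\qquad |\varepsilon_p| > 1 \;\Rightarrow\; \text{elastic demand.}
```

For instance, a 10% price increase that reduces utilization by 15% yields ε_p = −1.5, an elastic response.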

Relevance: 20.00%

Abstract:

We generalize Wertheim's first-order perturbation theory to account for the effect on the thermodynamics of the self-assembly of rings characterized by two energy scales. The theory is applied to a lattice model of patchy particles and tested against Monte Carlo simulations on an fcc lattice. These particles have 2 patches of type A and 10 patches of type B, which may form AA or AB bonds that decrease the energy by ε_AA and by ε_AB = r ε_AA, respectively. The angle θ between the 2 A-patches on each particle is fixed at 60°, 90°, or 120°. For values of r below 1/2 and above a threshold r_th(θ), the models exhibit a phase diagram with two critical points. Both theory and simulation predict that r_th increases when θ decreases. We show that the mechanism that prevents phase separation for models with decreasing values of θ is related to the formation of loops containing AB bonds. Moreover, we show that by including the free energy of B-rings (loops containing one AB bond), the theory describes the trends observed in the simulation results, but that for the lowest values of θ, the theoretical description deteriorates due to the increasing number of loops containing more than one AB bond.
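For orientation, the standard Wertheim first-order (TPT1) bonding free energy from which such treatments start is, for a set Γ of patch types,

```latex
\frac{A_{\mathrm{bond}}}{N k_B T}
= \sum_{\alpha \in \Gamma} \left( \ln X_\alpha - \frac{X_\alpha}{2} \right)
+ \frac{|\Gamma|}{2},
```

where X_α is the fraction of patches of type α not engaged in a bond. The paper's contribution is the extra ring (loop) free-energy term added to this baseline, whose explicit form is not reproduced here.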

Relevance: 20.00%

Abstract:

We investigate the structural chain-to-ring transition at low temperature in a gas of dipolar hard spheres (DHS). Due to the weakening of the entropic contribution, ring formation becomes noticeable when the effective dipole-dipole magnetic interaction increases; this results in the redistribution of particles from the usually observed flexible chains into flexible rings. The concentration ρ of DHS plays a crucial part in this transition: at very low ρ only chains and rings are observed, whereas even a slight increase of the volume fraction leads to the formation of branched or defect structures. As a result, the fraction of DHS aggregated in defect-free rings turns out to be a non-monotonic function of ρ. The average ring size is found to be a more slowly increasing function of ρ than that of chains. Both theory and computer simulations confirm the dramatic influence of ring formation on the ρ-dependence of the initial magnetic susceptibility χ when the temperature decreases. The rings, due to their zero total dipole moment, do not respond to a weak magnetic field and drive the strong decrease of the initial magnetic susceptibility.
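As a baseline for the susceptibility discussion: for an ideal gas of non-interacting dipoles of moment m at number density n, the initial (Langevin) susceptibility is

```latex
\chi_L = \frac{\mu_0\, n\, m^2}{3 k_B T},
```

so χ grows on cooling when chain-like correlations enhance the response, whereas rings, carrying zero net moment, contribute essentially nothing; their proliferation therefore depresses χ below this trend, consistent with the abstract's argument.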

Relevance: 20.00%

Abstract:

Consider the problem of scheduling a task set τ of implicit-deadline sporadic tasks to meet all deadlines on a t-type heterogeneous multiprocessor platform where tasks may access multiple shared resources. The multiprocessor platform has m_k processors of type-k, where k ∈ {1, 2, …, t}. The execution time of a task depends on the type of processor on which it executes. The set of shared resources is denoted by R. For each task τ_i, there is a resource set R_i ⊆ R such that for each job of τ_i, during one phase of its execution, the job requests to hold the resource set R_i exclusively, with the interpretation that (i) the job makes a single request to hold all the resources in the resource set R_i and (ii) at all times, when a job of τ_i holds R_i, no other job holds any resource in R_i. Each job of task τ_i may request the resource set R_i at most once during its execution. A job is allowed to migrate when it requests a resource set and when it releases the resource set, but a job is not allowed to migrate at other times. Our goal is to design a scheduling algorithm for this problem and prove its performance. We propose an algorithm, LP-EE-vpr, which offers the following guarantee: if an implicit-deadline sporadic task set is schedulable on a t-type heterogeneous multiprocessor platform by an optimal scheduling algorithm that allows a job to migrate only when it requests or releases a resource set, then our algorithm also meets the deadlines with the same restriction on job migration, if given processors 4 × (1 + MAXP × ⌈(|P| × MAXP) / min{m_1, m_2, …, m_t}⌉) times as fast. (Here MAXP and |P| are computed based on the resource sets that tasks request.) For the special case where each task requests at most one resource, the bound of LP-EE-vpr collapses to 4 × (1 + ⌈|R| / min{m_1, m_2, …, m_t}⌉). To the best of our knowledge, LP-EE-vpr is the first algorithm with proven performance guarantee for real-time scheduling of sporadic tasks with resource sharing on t-type heterogeneous multiprocessors.
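A quick numeric instance of the special-case bound, with hypothetical values |R| = 3 shared resources and min{m_1, …, m_t} = 2 processors of the scarcest type:

```latex
4 \times \left( 1 + \left\lceil \frac{|R|}{\min\{m_1,\dots,m_t\}} \right\rceil \right)
= 4 \times \left( 1 + \lceil 3/2 \rceil \right) = 4 \times 3 = 12,
```

i.e., LP-EE-vpr meets all deadlines if its processors are 12 times as fast as those available to the optimal restricted-migration scheduler.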

Relevance: 20.00%

Abstract:

We investigate the liquid-vapor interface of a model of patchy colloids. This model consists of hard spheres decorated with short-ranged attractive sites ("patches") of different types on their surfaces. We focus on a one-component fluid with two patches of type A and nine patches of type B (2A9B colloids), which has been found to exhibit reentrant liquid-vapor coexistence curves and very low-density liquid phases. We have used the density-functional theory form of Wertheim's first-order perturbation theory of association, as implemented by Yu and Wu [J. Chem. Phys. 116, 7094 (2002)], to calculate the surface tension, and the density and degree of association profiles, at the liquid-vapor interface of our model. In reentrant systems, where AB bonds dominate, an unusual thickening of the interface is observed at low temperatures. Furthermore, the surface tension versus temperature curve reaches a maximum, in agreement with Bernardino and Telo da Gama's mesoscopic Landau-Safran theory [Phys. Rev. Lett. 109, 116103 (2012)]. If BB attractions are also present, competition between AB and BB bonds gradually restores the monotonic temperature dependence of the surface tension. Lastly, the interface is "hairy," i.e., it contains a region where the average chain length is close to that in the bulk liquid, but where the density is that of the vapor. Sufficiently strong BB attractions remove these features, and the system reverts to the behavior seen in atomic fluids.

Relevance: 20.00%

Abstract:

The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixing of components originated by the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]. The nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17]. The nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model, and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem which can be addressed, for example, under the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, and target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, feature extraction, and unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of observed data yielding statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where the sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels, and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward.
In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among abundances. This dependence compromises ICA applicability to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades the ICA performance. IFA [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance. Considering the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. The MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at a lower computational complexity, some algorithms such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, very often, the processing of hyperspectral data, including unmixing, is preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. The newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations. To overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performances. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model. This model takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using the minimum description length (MDL) based algorithm [55].
We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, where abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm. This approach is in the vein of references 39 and 56, replacing independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need to have pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief summary of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the ICA and IFA limitations in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
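In symbols, the linear mixing model discussed above, with its abundance constraints, reads:

```latex
\mathbf{y} = \mathbf{M}\,\mathbf{a} + \mathbf{n},
\qquad a_i \ge 0 \;\; (i = 1,\dots,p), \qquad \sum_{i=1}^{p} a_i = 1,
```

where y is the observed spectral vector of one pixel, M = [m_1 … m_p] stacks the p endmember signatures as columns, a collects the abundance fractions, and n is noise. The full-additivity constraint Σ a_i = 1 is precisely the source of the statistical dependence among abundances that compromises ICA and IFA.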

Relevance: 20.00%

Abstract:

Hyperspectral remote sensing exploits the electromagnetic scattering patterns of different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixing of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of abundance fractions is constant, implying statistical dependence among them. This dependence compromises ICA applicability to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors with an unmixing matrix which minimizes the mutual information among sources. If sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33]. Under the linear mixing model, the observations from a scene are in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref. [37] is also of MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. The MVT-type approaches are complex from the computational point of view. Usually, these algorithms find in the first place the convex hull defined by the observed data and then fit a minimum volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to assure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data. ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists of flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of a lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46]. In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices. The latter is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data in the least-squares sense [48, 49]. We note, however, that VCA works with both projected and unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparable to N-FINDR; yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Section 19.3 and Section 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
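As an illustration of the projection idea behind the extraction step (a sketch in the spirit of VCA, not the published algorithm), the following assumes the presence of pure pixels and already dimensionality-reduced data; all names are ours:

```python
import numpy as np

def vca_like(Y, p, seed=0):
    """Illustrative endmember extraction: iteratively project the data onto a
    direction orthogonal to the subspace spanned by the endmembers found so
    far, and take the extreme projection as the next endmember.

    Y : (bands, pixels) data matrix; p : number of endmembers to extract.
    Returns the extracted signatures and the indices of the chosen pixels.
    """
    rng = np.random.default_rng(seed)
    bands, pixels = Y.shape
    E = np.zeros((bands, p))            # extracted endmember signatures
    idx = []
    for k in range(p):
        f = rng.standard_normal(bands)  # random search direction
        if k > 0:
            # Remove the component of f lying in span(E[:, :k]).
            Q, _ = np.linalg.qr(E[:, :k])
            f -= Q @ (Q.T @ f)
        f /= np.linalg.norm(f)
        proj = f @ Y                    # project every pixel onto f
        j = int(np.argmax(np.abs(proj)))  # extreme pixel = candidate vertex
        E[:, k] = Y[:, j]
        idx.append(j)
    return E, idx
```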

Relevance: 20.00%

Abstract:

A correlation and predictive scheme for the viscosity and self-diffusivity of liquid dialkyl adipates is presented. The scheme is based on the kinetic theory for dense hard-sphere fluids, applied to the van der Waals model of a liquid to predict the transport properties. A "universal" curve for a dimensionless viscosity of dialkyl adipates was obtained using recently published experimental viscosity and density data of compressed liquid dimethyl (DMA), dipropyl (DPA), and dibutyl (DBA) adipates. The experimental data are described by the correlation scheme with a root-mean-square deviation of ±0.34%. The parameters describing the temperature dependence of the characteristic volume, V_0, and the roughness parameter, R_η, for each adipate are well correlated with one single molecular parameter. Recently published experimental self-diffusion coefficients of the same set of liquid dialkyl adipates at atmospheric pressure were correlated using the characteristic volumes obtained from the viscosity data. The roughness factors, R_D, are well correlated with the same single molecular parameter found for viscosity. Tests are presented in order to assess the capability of the correlation scheme to estimate the viscosity of compressed liquid diethyl adipate (DEA) over a range of temperatures and pressures, by comparison with literature data, and its self-diffusivity at atmospheric pressure over a range of temperatures. It is noteworthy that no data for DEA were used to build the correlation scheme. The deviations encountered between predicted and experimental data for the viscosity and self-diffusivity do not exceed 2.0% and 2.2%, respectively, which are commensurate with the estimated experimental measurement uncertainty in both cases.
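The abstract does not give the working equations; in hard-sphere correlation schemes of this family (Assael, Dymond and co-workers), the dimensionless viscosity is usually defined along the lines of

```latex
\eta^{*} = 6.035 \times 10^{8} \left( \frac{1}{M R T} \right)^{1/2} \eta\, V^{2/3},
```

with M the molar mass, V the molar volume, and R the gas constant, and with η*/R_η taken to be a universal function of the reduced volume V/V_0. The numerical constant shown is the one usually quoted for viscosity in such schemes and should be treated as indicative rather than the authors' own.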

Relevance: 20.00%

Abstract:

The aggregation and management of Distributed Energy Resources (DERs) by a Virtual Power Player (VPP) is an important task in a smart grid context. The Energy Resource Management (ERM) of these DERs can become a hard and complex optimization problem. The large-scale integration of several DERs, including Electric Vehicles (EVs), may lead to a scenario in which the VPP needs several hours to obtain a solution for the ERM problem. This is why it is necessary to use metaheuristic methodologies to come up with a good solution in a reasonable amount of time. This paper proposes a Simulated Annealing (SA) approach to determine the ERM considering an intensive use of DERs, mainly EVs. The possibility of applying Demand Response (DR) programs to the EVs is considered, and a trip reduce DR program is implemented. The SA methodology is tested on a 32-bus distribution network with 2000 EVs, and the SA results are compared with a deterministic technique and with particle swarm optimization results.
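For reference, the acceptance rule at the core of any SA variant like this one: a candidate schedule whose cost exceeds the incumbent's by ΔC is accepted with probability

```latex
P(\text{accept}) = \min\left\{ 1,\; \exp\!\left( -\frac{\Delta C}{T_k} \right) \right\},
```

where the temperature T_k decreases over the iterations, so the search accepts fewer cost-worsening moves as it converges.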