21 results for Probability Weight : Rank-dependent Utility
in Repositório Científico do Instituto Politécnico de Lisboa - Portugal
Abstract:
We calculate the equilibrium thermodynamic properties, percolation threshold, and cluster distribution functions for a model of associating colloids, which consists of hard spherical particles having on their surfaces three short-ranged attractive sites (sticky spots) of two different types, A and B. The thermodynamic properties are calculated using Wertheim's perturbation theory of associating fluids. This also allows us to find the onset of self-assembly, which can be quantified by the maxima of the specific heat at constant volume. The percolation threshold is derived, under the no-loop assumption, for the correlated bond model: in all cases, two percolated phases become identical at a critical point, when one exists. Finally, the cluster size distributions are calculated by mapping the model onto an effective model, characterized by a state-dependent functionality f̄ and a unique bonding probability p̄. The mapping is based on the asymptotic limit of the cluster distribution functions of the generic model, and the effective parameters are defined through the requirement that the equilibrium cluster distributions of the true and effective models have the same number-averaged and weight-averaged sizes at all densities and temperatures. We also study the model numerically in the case where BB interactions are missing. In this limit, AB bonds either provide branching between A-chains (Y-junctions) if ε_AB/ε_AA is small, or drive the formation of a hyperbranched polymer if ε_AB/ε_AA is large. We find that the theoretical predictions describe quite accurately the numerical data, especially in the region where Y-junctions are present. There is fairly good agreement between theoretical and numerical results both for the thermodynamic (number of bonds and phase coexistence) and the connectivity properties of the model (cluster size distributions and percolation locus).
Abstract:
Master's degree in Diagnostic and Interventional Cardiovascular Technology. Specialization area: Cardiovascular Ultrasonography.
Abstract:
The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixing of components originated by the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]. The nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17]. The nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, under the maximum likelihood setup [19], the constrained least-squares approach [20], the spectral signature matching [21], the spectral angle mapper [22], and the subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. 
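The orthogonal subspace projection step described above can be sketched in a few lines (an illustrative toy with synthetic signatures; the function name, sizes, and numbers are assumptions, not the chapter's implementation):

```python
import numpy as np

def osp_projector(U):
    """Projector onto the orthogonal complement of the column space of U
    (the undesired endmember signatures)."""
    return np.eye(U.shape[0]) - U @ np.linalg.pinv(U)

# Toy setup: 4 bands, 2 undesired signatures, 1 signature of interest d.
rng = np.random.default_rng(0)
U = rng.random((4, 2))                  # undesired signatures (columns)
d = rng.random(4)                       # signature of interest
x = 0.6 * d + U @ np.array([0.3, 0.1])  # noiseless mixed pixel

P = osp_projector(U)
# P annihilates the undesired components; a matched filter on the
# projected pixel recovers the target abundance exactly without noise.
score = (d @ P @ x) / (d @ P @ d)
print(round(score, 3))  # 0.6
```

In the noiseless case the score equals the target abundance exactly, since P removes the contribution of the columns of U.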
As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, to feature extraction, and to unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of observed data yielding statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward. In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among abundances. This dependence compromises ICA applicability to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades the ICA performance.
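The dependence induced by the sum-to-one constraint can be checked numerically: abundance fractions drawn on the simplex are negatively correlated, so the independence assumption behind ICA cannot hold (a minimal sketch using Dirichlet-distributed abundances, which are illustrative, not the chapter's data):

```python
import numpy as np

# Sum-to-one makes abundance fractions mutually dependent: for
# Dirichlet-distributed abundances every pairwise covariance is negative,
# contradicting the independence assumption behind ICA.
rng = np.random.default_rng(4)
A = rng.dirichlet(np.ones(3), size=100_000)  # each row sums to one
cov = np.cov(A.T)                            # 3 x 3 sample covariance
off_diag = cov[np.triu_indices(3, k=1)]
print(bool((off_diag < 0).all()))  # True
```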
IFA [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance. Considering the linear mixing model, hyperspectral observations are in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. The MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at a lower computational complexity, some algorithms such as the vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and the N-FINDR [45] still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, very often, the processing of hyperspectral data, including unmixing, is preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR).
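That dimensionality-reduction step can be sketched as a PCA projection computed by SVD (a generic sketch on synthetic linear mixtures; the sizes and the number of retained components are illustrative assumptions):

```python
import numpy as np

def pca_reduce(X, k):
    """Project spectral vectors (columns of X) onto the top-k principal
    components; a common preprocessing step before unmixing."""
    Xc = X - X.mean(axis=1, keepdims=True)        # remove the mean spectrum
    U, s, _ = np.linalg.svd(Xc, full_matrices=False)
    return U[:, :k].T @ Xc                        # k x N reduced data

rng = np.random.default_rng(2)
M = rng.random((100, 3))                           # 100 bands, 3 endmembers
A = rng.dirichlet(np.ones(3), size=500).T          # abundances sum to one
X = M @ A + 1e-3 * rng.standard_normal((100, 500)) # noisy linear mixtures
Y = pca_reduce(X, 3)                               # 3 components keep ~all signal
print(Y.shape)  # (3, 500)
```

With p endmembers and the sum-to-one constraint the signal lives in a low-dimensional affine set, which is why a handful of components suffices.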
Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. The newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations. To overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performances. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model. This model takes into account the degradation mechanisms normally found in hyperspectral applications—namely, signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to data. The MOG parameters (number of components, means, covariances, and weights) are inferred using the minimum description length (MDL)-based algorithm [55]. We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, where abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm.
This approach is in the vein of references 39 and 56, replacing independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need to have pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates the spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief summary of ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the ICA and IFA limitations in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
Abstract:
This paper is on the problem of short-term hydro scheduling (STHS), particularly concerning head-dependent reservoirs under competitive environment. We propose a novel method, based on mixed-integer nonlinear programming (MINLP), for optimising power generation efficiency. This method considers hydroelectric power generation as a nonlinear function of water discharge and of the head. The main contribution of this paper is that discharge ramping constraints and start/stop of units are also considered, in order to obtain more realistic and feasible results. The proposed method has been applied successfully to solve two case studies based on Portuguese cascaded hydro systems, providing a higher profit at an acceptable computation time in comparison with classical optimisation methods based on mixed-integer linear programming (MILP).
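The nonlinearity at stake can be made concrete: generated power is roughly proportional to the product of discharge and head, and the head itself depends on storage, which is what pushes the formulation beyond MILP. A minimal sketch with hypothetical constants (the efficiency and the head-storage relation are illustrative assumptions, not taken from the case studies):

```python
RHO, G = 1000.0, 9.81  # water density (kg/m^3), gravity (m/s^2)

def hydro_power_mw(q, head, efficiency=0.9):
    """Electric power (MW) for discharge q (m^3/s) and head (m): the
    bilinear q * head term is the source of the nonlinearity."""
    return efficiency * RHO * G * q * head / 1e6

def head_from_storage(v, h0=80.0, alpha=1e-6):
    """Toy affine head-storage relation (hypothetical coefficients): the
    head drops as the reservoir is drawn down, coupling the time periods."""
    return h0 + alpha * v

# 100 m^3/s through a reservoir holding 2e6 m^3 above the reference level:
print(round(hydro_power_mw(100.0, head_from_storage(2e6)), 2))  # 72.4
```

Ignoring the head dependence (fixing head per period) would make the objective linear in q, which is the simplification the MILP-based methods rely on.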
Abstract:
This paper is on the problem of short-term hydro scheduling (STHS), particularly concerning a head-dependent hydro chain. We propose a novel mixed-integer nonlinear programming (MINLP) approach, considering hydroelectric power generation as a nonlinear function of water discharge and of the head. As a new contribution to earlier studies, we model the on-off behavior of the hydro plants using integer variables, in order to avoid water discharges at forbidden areas. Thus, an enhanced STHS is provided due to the more realistic modeling presented in this paper. Our approach has been applied successfully to solve a test case based on one of the Portuguese cascaded hydro systems with a negligible computational time requirement.
Abstract:
This paper is an elaboration of the DECA algorithm [1] to blindly unmix hyperspectral data. The underlying mixing model is linear, meaning that each pixel is a linear mixture of the endmember signatures weighted by the corresponding abundance fractions. The proposed method, as DECA, is tailored to highly mixed mixtures in which the geometric-based approaches fail to identify the simplex of minimum volume enclosing the observed spectral vectors. We resort then to a statistical framework, where the abundance fractions are modeled as mixtures of Dirichlet densities, thus enforcing the constraints on abundance fractions imposed by the acquisition process, namely non-negativity and constant sum. With respect to DECA, we introduce two improvements: 1) the number of Dirichlet modes is inferred based on the minimum description length (MDL) principle; 2) the generalized expectation maximization (GEM) algorithm we adopt to infer the model parameters is improved by using alternating minimization and augmented Lagrangian methods to compute the mixing matrix. The effectiveness of the proposed algorithm is illustrated with simulated and real data.
Abstract:
The intended objective is to achieve a reduction in the electricity consumption of the Municipality of Vila Franca de Xira by resorting to URE (rational use of energy). This URE consists of a set of actions, measures, procedures and/or equipment whose application aims to foster a more rational and cost-effective use and management of energy; it can therefore increasingly be said that URE is an essential factor of energy economy and, hence, of cost reduction. The application of these actions, measures, procedures and/or equipment should therefore be extended to the Municipalities, in order to reduce an energy bill of great weight and significance, one which, in times of deep economic crisis, can become problematic or even mortgage their future. Thus, if on the one hand there is in the Municipality a broad set of situations established without criteria of energy rationality to which, nevertheless, it is already possible to apply these URE actions, measures, procedures and/or equipment, on the other hand it is thereby assured that the intended reduction of the Municipality's energy consumption can be achieved, always ensuring and maintaining the comfort and productivity of the activities that depend on that energy. With this objective, and starting from the analysis of a set of installations and/or equipment already existing or that may come to be established by the Municipality, the aim is to define, study, and classify a set of data that tends to develop, strengthen, and justify the decision for their application. A broad set of practical examples, or Case Studies, will be developed, attempting to reach conclusions on the economic viability of the solutions presented and, from there, on the decision regarding the timeliness of carrying out the set of works inherent to the application of this type of situation.
Abstract:
This paper is on the problem of short-term hydro scheduling, particularly concerning head-dependent cascaded hydro systems. We propose a novel mixed-integer quadratic programming approach, considering not only head-dependency, but also discontinuous operating regions and discharge ramping constraints. Thus, an enhanced short-term hydro scheduling is provided due to the more realistic modeling presented in this paper. Numerical results from two case studies, based on Portuguese cascaded hydro systems, illustrate the proficiency of the proposed approach.
Abstract:
A series of large area single layers and heterojunction cells in the assembly glass/ZnO:Al/p(SixC1-x:H)/i(Si:H)/n(SixC1-x:H)/Al (0 < x < 1)
Abstract:
A series of large area single layers and glass/ZnO:Al/p(SixC1-x:H)/i(Si:H)/n(SixC1-x:H)/Al (0 < x < 1) heterojunction cells were produced by plasma-enhanced chemical vapour deposition (PE-CVD) at low temperature. Junction properties, carrier transport and photogeneration are investigated from dark and illuminated current-voltage (J-V) and capacitance-voltage (C-V) characteristics. For the heterojunction cells, atypical J-V characteristics under different illumination conditions are observed, leading to poor fill factors. High series resistances around 10^6 Ω are also measured. These experimental results were used as a basis for the numerical simulation of the energy band diagram and the electrical field distribution of the structures. Further comparison with the sensor performance gave satisfactory agreement. Results show that the conduction band offset is the most limiting parameter for the optimal collection of the photogenerated carriers. As the optical gap increases and the conductivity of the doped layers decreases, the transport mechanism changes from a drift to a diffusion-limited process.
Abstract:
We study a model consisting of particles with dissimilar bonding sites ("patches"), which exhibits self-assembly into chains connected by Y-junctions, and investigate its phase behaviour by both simulations and theory. We show that, as the energy cost ε_j of forming Y-junctions increases, the extent of the liquid-vapour coexistence region at lower temperatures and densities is reduced. The phase diagram thus acquires a characteristic "pinched" shape in which the liquid branch density decreases as the temperature is lowered. To our knowledge, this is the first model in which the predicted topological phase transition between a fluid composed of short chains and a fluid rich in Y-junctions is actually observed. Above a certain threshold for ε_j, condensation ceases to exist because the entropy gain of forming Y-junctions can no longer offset their energy cost. We also show that the properties of these phase diagrams can be understood in terms of a temperature-dependent effective valence of the patchy particles. (C) 2011 American Institute of Physics. [doi: 10.1063/1.3605703]
Abstract:
Objective: To assess different factors influencing adiponectinemia in obese and normal-weight women; to identify factors associated with the variation (Δ) in adiponectinemia in obese women following a 6-month weight loss program, according to surgical/non-surgical interventions. Methods: We studied 100 normal-weight women and 112 obese premenopausal women; none of them was on any medical treatment. Women were characterized for anthropometrics, daily macronutrient intake, smoking status, contraceptives use, adiponectin as well as IL-6 and TNF-α serum concentrations. Results: Adiponectinemia was lower in obese women (p < 0.001), revealing an inverse association with waist-to-hip ratio (p < 0.001; r = –0.335). Normal-weight women presented lower adiponectinemia among smokers (p = 0.041); body fat, waist-to-hip ratio, TNF-α levels, carbohydrate intake, and smoking all influence adiponectinemia (r² = 0.436). After weight loss interventions, a significant modification in macronutrient intake occurs followed by anthropometrics decrease (chiefly after bariatric procedures) and adiponectinemia increase (similar after surgical and non-surgical interventions). After bariatric intervention, Δ adiponectinemia was inversely correlated to Δ waist circumference and Δ carbohydrate intake (r² = 0.706). Conclusion: Anthropometrics, diet, smoking, and TNF-α levels all influence adiponectinemia in normal-weight women, although explaining less than 50% of it. In obese women, anthropometrics modestly explain adiponectinemia. Opposite to non-surgical interventions, after bariatric surgery adiponectinemia increase is largely explained by diet composition and anthropometric changes.
Abstract:
Objective - To evaluate the effect of prepregnancy body mass index (BMI), energy and macronutrient intakes during pregnancy, and gestational weight gain (GWG) on the body composition of full-term appropriate-for-gestational age neonates. Study Design - This is a cross-sectional study of a systematically recruited convenience sample of mother-infant pairs. Food intake during pregnancy was assessed by food frequency questionnaire and its nutritional value by the Food Processor Plus (ESHA Research Inc, Salem, OR). Neonatal body composition was assessed both by anthropometry and air displacement plethysmography. Explanatory models for neonatal body composition were tested by multiple linear regression analysis. Results - A total of 100 mother-infant pairs were included. Prepregnancy overweight was positively associated with offspring weight, weight/length, BMI, and fat-free mass in the whole sample; in males, it was also positively associated with midarm circumference, ponderal index, and fat mass. Higher energy intake from carbohydrate was positively associated with midarm circumference and weight/length in the whole sample. Higher GWG was positively associated with weight, length, and midarm circumference in females. Conclusion - Positive adjusted associations were found between both prepregnancy BMI and energy intake from carbohydrate and offspring body size in the whole sample. Positive adjusted associations were also found between prepregnancy overweight and adiposity in males, and between GWG and body size in females.
Abstract:
Linear unmixing decomposes a hyperspectral image into a collection of reflectance spectra of the materials present in the scene, called endmember signatures, and the corresponding abundance fractions at each pixel in a spatial area of interest. This paper introduces a new unmixing method, called Dependent Component Analysis (DECA), which overcomes the limitations of unmixing methods based on Independent Component Analysis (ICA) and on geometrical properties of hyperspectral data. DECA models the abundance fractions as mixtures of Dirichlet densities, thus enforcing the constraints on abundance fractions imposed by the acquisition process, namely non-negativity and constant sum. The mixing matrix is inferred by a generalized expectation-maximization (GEM) type algorithm. The performance of the method is illustrated using simulated and real data.
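The generative step of the Dirichlet-mixture abundance model can be sketched directly: every draw satisfies the non-negativity and sum-to-one constraints by construction (the mode weights and concentration parameters below are hypothetical, not estimates from DECA):

```python
import numpy as np

# Abundances drawn from a two-mode mixture of Dirichlet densities.
rng = np.random.default_rng(3)
weights = np.array([0.7, 0.3])            # mode probabilities (illustrative)
alphas = [np.array([8.0, 2.0, 2.0]),      # mode concentrated near one endmember
          np.array([1.0, 1.0, 1.0])]      # uniform mode over the simplex

def sample_abundances(n):
    """Draw n abundance vectors from the Dirichlet mixture."""
    modes = rng.choice(len(weights), size=n, p=weights)
    return np.vstack([rng.dirichlet(alphas[m]) for m in modes])

A = sample_abundances(1000)
print(A.shape, bool((A >= 0).all() and np.allclose(A.sum(axis=1), 1.0)))
```

This is why the model needs no pure pixels: the density can concentrate anywhere inside the simplex rather than at its vertices.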
Abstract:
Cloud SLAs compensate customers with credits when average availability drops below certain levels. This is too inflexible because consumers lose non-measurable amounts of performance and are only compensated later, in the next charging cycles. We propose to schedule virtual machines (VMs) driven by range-based non-linear reductions of utility, different for classes of users and across different ranges of resource allocations: partial utility. This customer-defined metric allows providers to transfer resources between VMs in meaningful and economically efficient ways. We define a comprehensive cost model incorporating the partial utility given by clients to a certain level of degradation, when VMs are allocated in overcommitted environments (Public, Private, Community Clouds). CloudSim was extended to support our scheduling model. Several simulation scenarios with synthetic and real workloads are presented, using datacenters with different dimensions regarding the number of servers and computational capacity. We show that partial utility-driven scheduling allows more VMs to be allocated. It brings benefits to providers, regarding revenue and resource utilization, allowing for more revenue per resource allocated and scaling well with the size of datacenters when compared with a utility-oblivious redistribution of resources. Regarding clients, their workloads' execution time is also improved, by incorporating an SLA-based redistribution of their VMs' computational power.
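The notion of partial utility can be sketched as class-dependent degradation curves (the curves, class names, and numbers below are illustrative assumptions, not the paper's calibrated model):

```python
def partial_utility(alloc_fraction, klass):
    """Utility retained when a VM receives alloc_fraction of its requested
    capacity; one non-linear curve per client class (all illustrative)."""
    curves = {
        "gold":   lambda f: f ** 0.5,  # degrades slowly for small cuts
        "silver": lambda f: f,         # linear degradation
        "bronze": lambda f: f ** 2,    # loses utility quickly when degraded
    }
    f = max(0.0, min(1.0, alloc_fraction))  # clamp to [0, 1]
    return curves[klass](f)

# In an overcommitted host, a provider would shave capacity where the
# utility loss per unit of resource is smallest across the running VMs.
print(round(partial_utility(0.8, "gold"), 3))    # 0.894
print(round(partial_utility(0.8, "bronze"), 3))  # 0.64
```

At the same 20% cut the two classes report very different retained utility, which is the information the scheduler exploits when redistributing resources.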