Abstract:
Smart grids with an intensive penetration of distributed energy resources will play an important role in future power system scenarios. The intermittent nature of renewable energy sources brings new challenges and requires efficient management of those sources. Additional storage resources can be used beneficially to address this problem; the massive use of electric vehicles, particularly in vehicle-to-grid operation (usually referred to as gridable vehicles or V2G), therefore becomes a very relevant issue. This paper addresses the impact of Electric Vehicles (EVs) on system operation costs and on the power demand curve for a distribution network with large penetration of Distributed Generation (DG) units. An efficient management methodology for EV charging and discharging is proposed, formulated as a multi-objective optimization problem. The main goals of the proposed methodology are to minimize the system operation costs and to minimize the difference between the minimum and maximum system demand (leveling the power demand curve). The proposed methodology performs the day-ahead scheduling of distributed energy resources in a distribution network with high penetration of DG and a large number of electric vehicles. A 32-bus distribution network is used in the case study section, considering different scenarios of EV penetration, to analyze their impact on the network and on the management of the other energy resources.
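In assumed notation (not the paper's own symbols), the two objectives can be stated schematically: with x the vector of EV charging/discharging decisions over a day-ahead horizon of T periods, c_t the energy cost and P_t(x) the resulting system demand in period t,

```latex
\min_{x} \; f_1(x) = \sum_{t=1}^{T} c_t \, P_t(x),
\qquad
\min_{x} \; f_2(x) = \max_{t} P_t(x) - \min_{t} P_t(x),
```

where minimizing f_2 levels the power demand curve. This is a sketch of the problem structure, not the paper's exact formulation.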
Impact of design options in zero energy building conception: the case of large buildings in Portugal
Abstract:
The recast of Directive 2010/31/EU, which implements the new NZEB concept in new buildings, is to be fully respected by all Member States and is an important measure to promote the reduction of the energy consumption of buildings and to encourage the use of renewable energy. In this study, the applicability of the nearly zero energy building concept to a large office building was tested, along with its impact over a 50-year life cycle span.
Abstract:
Fluoxetine is a selective serotonin reuptake inhibitor (SSRI) widely used in the treatment of major depression. It has been detected in surface waters and wastewaters and is able to negatively affect aquatic organisms. Most ecotoxicity studies have focused only on the pharmaceuticals themselves, although excipients can also pose a risk to non-target organisms. In this work, the ecotoxicity of five medicines (three generic formulations and two brand labels) containing the same active substance (fluoxetine hydrochloride) was tested on the alga Chlorella vulgaris, in order to evaluate whether excipients can influence their ecotoxicity. Effective concentrations causing 50% inhibition (EC50) ranging from 0.25 to 15 mg L−1 were obtained in the growth inhibition tests performed for the different medicines; the corresponding values expressed as fluoxetine concentration are 10 times lower. Higher EC50 values have been published for the same alga considering the toxicity of fluoxetine alone, so this increase in toxicity may be attributed to the presence of excipients. More studies on the ecotoxicological effects of excipients are therefore required in order to assess the environmental risk they may pose to aquatic organisms.
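A minimal sketch of the unit conversion implied above, with illustrative numbers only (the 10% active-substance mass fraction is an assumption chosen to match the "10 times lower" statement, not a value from the paper):

```python
def active_ec50(formulation_ec50_mg_l: float, active_mass_fraction: float) -> float:
    """Convert a whole-medicine EC50 into the EC50 expressed as the
    concentration of the active substance (here fluoxetine hydrochloride)."""
    return formulation_ec50_mg_l * active_mass_fraction

# Example: a formulation EC50 of 2.5 mg/L with an assumed 10% active
# content corresponds to 0.25 mg/L expressed as fluoxetine hydrochloride.
print(active_ec50(2.5, 0.10))  # 0.25
```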
Abstract:
Coal contains trace elements and naturally occurring radionuclides such as 40K, 232Th and 238U. When coal is burned, minerals, including most of the radionuclides, do not burn and become concentrated in the ash, at several times their content in the coal. Usually, a small fraction of the fly ash produced (2-5%) is released into the atmosphere. The activities released depend on many factors (concentration in the coal, ash content and inorganic matter of the coal, combustion temperature, ratio between bottom and fly ash, and filtering system). Therefore, marked differences should be expected between the by-products produced and the amount of activity discharged (per unit of energy produced) by different coal-fired power plants. The effects of these releases on the environment due to ground deposition have received some attention, but the results from these studies are not unanimous and cannot be taken as a generic conclusion for all coal-fired power plants. In this study, dispersion modelling of natural radionuclides was carried out to assess the impact of continuous atmospheric releases from a selected coal plant. The natural radioactivity of the coal and the fly ash was measured, and the dispersion was modelled by a Gaussian plume, estimating the activity concentration at different heights up to a distance of 20 km in several wind directions. External and internal doses (inhalation and ingestion) and the resulting risk were calculated for the population living within 20 km of the coal plant. On average, the effective dose is lower than the ICRP's limit and the risk is lower than the U.S. EPA's limit; therefore, in this situation, the considered exposure does not pose any risk. However, when considering the dispersion in the prevailing wind direction, these values become significant, owing to increases of 75% and 44% in the 232Th and 226Ra concentrations, respectively.
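A minimal sketch of the standard ground-reflecting Gaussian plume formula underlying such dispersion modelling, under textbook assumptions (flat terrain, total ground reflection, steady wind); the release rate, wind speed, stack height and dispersion parameters below are placeholders, not values from the study:

```python
import numpy as np

def plume_concentration(Q, u, y, z, H, sigma_y, sigma_z):
    """Ground-reflecting Gaussian plume: activity concentration (Bq/m^3)
    at crosswind offset y (m) and height z (m), for a continuous release
    rate Q (Bq/s), wind speed u (m/s) and effective stack height H (m).
    sigma_y and sigma_z (m) are the dispersion parameters evaluated at the
    downwind distance of interest (they grow with distance and depend on
    atmospheric stability)."""
    lateral = np.exp(-y**2 / (2 * sigma_y**2))
    vertical = (np.exp(-(z - H)**2 / (2 * sigma_z**2))
                + np.exp(-(z + H)**2 / (2 * sigma_z**2)))
    return Q * lateral * vertical / (2 * np.pi * u * sigma_y * sigma_z)

# Illustrative ground-level, plume-axis value; all numbers are placeholders.
print(plume_concentration(Q=1e6, u=4.0, y=0.0, z=0.0, H=100.0,
                          sigma_y=350.0, sigma_z=120.0))
```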
Abstract:
Hand-off (or hand-over), the process whereby mobile nodes select the best available access point to transfer data, has been well studied in wireless networks. The performance of a hand-off process depends on the specific characteristics of the wireless links. In the case of low-power wireless networks, hand-off decisions must be taken carefully, considering the unique properties of inexpensive low-power radios. This paper addresses the design, implementation and evaluation of smart-HOP, a hand-off mechanism tailored for low-power wireless networks. This work has three main contributions. First, it formulates the hard hand-off process for low-power networks (such as typical wireless sensor networks - WSNs) with a probabilistic model, to investigate the impact of the most relevant channel parameters through an analytical approach. Second, it confirms the probabilistic model through simulation and further elaborates on the impact of several hand-off parameters. Third, it fine-tunes the most relevant hand-off parameters via an extended set of experiments, in a realistic experimental scenario. The evaluation shows that smart-HOP performs well in the transitional region, achieving a relative delivery ratio above 98 percent and hand-off delays on the order of a few tens of milliseconds.
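A minimal sketch of a threshold-plus-hysteresis hard hand-off rule of the general kind such mechanisms tune; the parameter names, default values and window length are illustrative assumptions, not the values derived in the paper:

```python
def should_handoff(serving_rssi, candidate_rssi, threshold=-90.0,
                   hysteresis=5.0, window=3):
    """Trigger a hand-off when the serving link has stayed below the
    quality threshold and a candidate has exceeded the threshold plus a
    hysteresis margin for `window` consecutive samples (dBm, newest last).
    The stability window avoids ping-pong switching on noisy links."""
    serving_bad = all(r < threshold for r in serving_rssi[-window:])
    candidate_good = all(c > threshold + hysteresis
                         for c in candidate_rssi[-window:])
    return serving_bad and candidate_good

# Example: serving link degraded for three samples while a candidate is
# stable above the margin, so the node would switch access point.
print(should_handoff([-93, -94, -92], [-80, -82, -81]))  # True
```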
Abstract:
While Cluster-Tree network topologies look promising for WSN applications with timeliness and energy-efficiency requirements, we have yet to witness their adoption in commercial and academic solutions. One of the arguments that hinder the use of these topologies concerns their lack of flexibility in adapting to changes in the network, such as in traffic flows. This paper presents a solution that enables these networks to self-adapt their clusters' duty cycle and scheduling, providing increased quality of service to multiple traffic flows. Importantly, our approach enables a network to change its cluster scheduling without requiring long inaccessibility times or the re-association of the nodes. We show how to apply our methodology to the case of IEEE 802.15.4/ZigBee cluster-tree WSNs without significant changes to the protocol. Finally, we analyze and demonstrate the validity of our methodology through a comprehensive simulation and experimental validation using commercially available technology on a Structural Health Monitoring application scenario.
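A minimal sketch of the kind of cluster-schedule recomputation described above, assuming a simple time-division model in which each cluster's active portion occupies a non-overlapping block of the superframe cycle sized by its traffic demand (names and structure are illustrative assumptions, not the paper's algorithm):

```python
def reschedule_clusters(demands, cycle_slots):
    """Assign each cluster a contiguous block of active slots proportional
    to its traffic demand, so active portions never overlap within one
    cycle. demands: {cluster_id: relative traffic weight}."""
    total = sum(demands.values())
    schedule, start = {}, 0
    for cid, weight in sorted(demands.items()):
        length = max(1, round(cycle_slots * weight / total))
        schedule[cid] = (start, start + length)  # [start, end) slot range
        start += length
    return schedule

# Example: cluster 2 carries twice the traffic of the others and receives
# a proportionally longer active window in a 16-slot cycle.
print(reschedule_clusters({0: 1, 1: 1, 2: 2}, cycle_slots=16))
```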
Abstract:
Presentation given at the OH&S Forum 2011 - International Forum on Occupational Health and Safety: Policies, profiles and services, held in Finland, 20-22 June 2011.
Abstract:
Poster presented at the 8th Conference of the European Academy of Occupational Health Psychology, Valencia, 12-14 November 2008.
Abstract:
Paper presented at the 30th Sunbelt Social Networks Conference, Riva del Garda, Italy, 3 July 2010.
Abstract:
The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]. The nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17]. The nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model, and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, under the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, and target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been successfully applied to blind source separation, feature extraction, and unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of observed data that yields statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where the sources are the abundance fractions of each endmember. The first approach faces two problems: (1) the number of samples is limited to the number of channels, and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward.
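Under the linear mixing model discussed above, a pixel spectrum y satisfies y = Ma + n, where M stacks the endmember signatures and the abundances a are nonnegative and sum to one. A minimal fully constrained least-squares sketch of the supervised case (illustrative, not one of the cited algorithms):

```python
import numpy as np
from scipy.optimize import minimize

def unmix_fcls(y, M):
    """Fully constrained least-squares unmixing of one pixel: find
    abundances a minimizing ||y - M a||^2 subject to a >= 0, sum(a) = 1.
    y: (bands,) pixel spectrum; M: (bands, endmembers) signature matrix."""
    p = M.shape[1]
    res = minimize(lambda a: np.sum((y - M @ a) ** 2),
                   x0=np.full(p, 1.0 / p),
                   bounds=[(0.0, 1.0)] * p,
                   constraints=[{"type": "eq", "fun": lambda a: a.sum() - 1.0}],
                   method="SLSQP")
    return res.x

# Example: two synthetic endmembers mixed 70/30 plus a little noise.
rng = np.random.default_rng(0)
M = np.abs(rng.normal(size=(50, 2)))
y = M @ np.array([0.7, 0.3]) + 0.01 * rng.normal(size=50)
print(unmix_fcls(y, M))  # approximately [0.7, 0.3]
```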
In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among the abundances. This dependence compromises the applicability of ICA to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades ICA performance. Independent factor analysis (IFA) [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance. Under the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. MVT-type approaches are computationally complex: usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at a lower computational complexity, some algorithms such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requisite that may not hold in some data sets; in any case, these algorithms find the set of purest pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, the processing of hyperspectral data, including unmixing, is very often preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. A newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations; to overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performance. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model that takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using a minimum description length (MDL) based algorithm [55].
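A minimal sketch of the dimensionality-reduction step mentioned above, using PCA to project the spectral vectors onto a low-dimensional subspace before unmixing (an illustrative use of scikit-learn, standing in for the PCA/MNF/SVD techniques cited):

```python
import numpy as np
from sklearn.decomposition import PCA

def reduce_cube(cube, n_components):
    """Project a hyperspectral cube of shape (rows, cols, bands) onto its
    first n_components principal components, returning an array of shape
    (rows, cols, n_components). With p endmembers, the noiseless linear
    mixtures lie in a (p-1)-dimensional affine set, so n_components is
    typically chosen near the expected number of endmembers."""
    rows, cols, bands = cube.shape
    flat = cube.reshape(-1, bands)              # one spectrum per row
    reduced = PCA(n_components=n_components).fit_transform(flat)
    return reduced.reshape(rows, cols, n_components)

# Example: a synthetic 20x20 cube with 100 bands reduced to 5 components.
cube = np.random.rand(20, 20, 100)
print(reduce_cube(cube, 5).shape)  # (20, 20, 5)
```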
We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, in which the abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm. This approach is in the vein of references 39 and 56, replacing the independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need for pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief overview of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the limitations of ICA and IFA in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
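In assumed notation (symbols are not taken from the chapter), the abundance prior sketched above takes each abundance vector a, with p components, nonnegative and summing to one, to be drawn from a K-component mixture of Dirichlet densities whose weights and parameters the EM-type algorithm estimates together with the mixing matrix:

```latex
p(\mathbf{a}) = \sum_{k=1}^{K} \epsilon_k \,
\mathcal{D}(\mathbf{a} \mid \boldsymbol{\theta}_k),
\qquad
\mathcal{D}(\mathbf{a} \mid \boldsymbol{\theta})
= \frac{\Gamma\!\big(\textstyle\sum_{j} \theta_j\big)}{\prod_{j} \Gamma(\theta_j)}
\prod_{j=1}^{p} a_j^{\theta_j - 1},
\quad a_j \ge 0,\; \sum_{j=1}^{p} a_j = 1,
```

so positivity and full additivity are built into the source model rather than imposed afterwards.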
Abstract:
The Online Mathematics Education Project (MatActiva) is an exciting new initiative which aims to support and enhance mathematics education. The project is led by the Institute of Accounting and Administration of Porto (ISCAP), part of the Polytechnic Institute of Porto (IPP). It provides innovative resources and carefully constructed materials around themes such as Elementary Mathematics, Calculus, Algebra, Statistics and Financial Mathematics to help support and inspire students and teachers of mathematics. The goal is to increase mathematical understanding, confidence and enjoyment, enrich the mathematical experience of each person, and promote creative and imaginative approaches to mathematics. Furthermore, the project can be used to deliver engaging and effective mathematics instruction through the flipped classroom model. This paper also presents the findings of a large survey, whose purpose was to study the students' reaction to the project.
Abstract:
The anthropometric (body weight, height, upper arm circumference, triceps and subscapular skinfolds; Quetelet index and arm muscle circumference) and blood biochemistry (proteins and lipids) parameters were evaluated in 93 male and 27 female volunteers, aged 17-72 years, living in the malarial endemic area of Humaita city (southwest Amazon). According to their malarial history they were assigned to four different groups: G1 - controls without malarial history (n:30); G2 - controls with malarial history but without current manifestation of the disease (n:40); G3 - patients with Plasmodium vivax (n:19); and G4 - patients with Plasmodium falciparum (n:31). The malarial status was established by clinical and laboratory findings. The overall anthropometric and blood biochemistry data discriminated the groups differently. The anthropometric data had low sensitivity and contrasted only the two extremes (G1>G4), whereas the biochemistry differentiated two broad groups, the healthy (G1+G2) and the patients (G3+G4). The nutritional status of the P. falciparum patients was highly depressed for most of the studied indices, but none was sensitive enough to differentiate this group from the P. vivax group (G3). On the other hand, the two healthy groups could be differentiated through the levels of ceruloplasmin (G1
Abstract:
A thesis submitted to the University of Innsbruck for the doctoral degree in Natural Sciences (Physics) and to the New University of Lisbon for the doctoral degree in Physics (Atomic and Molecular Physics).
Abstract:
Scientific article currently available as Early View (Online Version of Record published before inclusion in an issue).
Abstract:
In this work, cluster analysis is applied to a real dataset of biological features of several Portuguese reservoirs. All statistical analysis was performed using the R statistical software. Several metrics and methods were explored, among them the combination of the Euclidean metric and the hierarchical Ward method. Although this combination was not the best in terms of internal and stability validation, it was still a good solution and produced results that were readily interpretable for the problem at hand.
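A minimal sketch of the Euclidean/Ward combination named above, written in Python with SciPy for illustration (the study itself used R); the feature matrix and the cut into three clusters are placeholder assumptions:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Placeholder data: 10 reservoirs described by 4 biological features.
rng = np.random.default_rng(42)
features = rng.normal(size=(10, 4))

# Ward linkage on Euclidean distances (Ward's method assumes Euclidean).
Z = linkage(features, method="ward", metric="euclidean")

# Cut the dendrogram into an assumed number of clusters for inspection.
labels = fcluster(Z, t=3, criterion="maxclust")
print(labels)  # cluster assignment for each reservoir
```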