74 results for Performance capability
Abstract:
Abstract I (Pedagogical Practice) - This internship report was written within the scope of the curricular unit Internship in Specialised Teaching of the Master's in Music Teaching at the Escola Superior de Música de Lisboa. The document is therefore based on the pedagogical practice carried out at the Conservatório de Música David de Sousa – Polo Pombal during the 2014-2015 school year, covering three students at different levels of instruction. This report characterises the school where the internship took place, as well as each student's performance over the school year, highlighting aspects of motor, aural and expressive competence. The work consisted of an evaluation of my own performance as a trumpet teacher, allowing me to reflect on the stronger and weaker points of my work so that I may reach a higher level in my future teaching activity.
Abstract:
This research aims at analysing the mechanical performance of concrete with recycled aggregates (RA) from construction and demolition waste (CDW) from various locations in Portugal. First, the characteristics of the various aggregates (natural and recycled) used in the production of concrete were thoroughly analysed. The composition of the RA was determined and several physical and chemical tests of the aggregates were performed. In order to evaluate the mechanical performance of the concrete, compressive strength (in cubes and cylinders), splitting tensile strength, modulus of elasticity and abrasion resistance tests were performed. Concrete mixes with RA from CDW from several recycling plants were evaluated, in order to understand the influence that the RA's collection point, and consequently their composition, has on the characteristics of the mixes produced. The analysis of the mechanical performance led to the conclusion that the use of RA worsens most of the properties tested, especially when fine RA are used. On the other hand, there was an increase in abrasion resistance when coarse RA were used. Overall, the use of this type of aggregates, in limited contents, is viable from a mechanical viewpoint. (C) 2015 Elsevier Ltd. All rights reserved.
Abstract:
The aim of this paper is to evaluate the influence of the crushing process used to obtain recycled concrete aggregates on the performance of concrete made with those aggregates. Two crushing methods were considered: primary crushing, using a jaw crusher, and primary plus secondary crushing (PSC), using a jaw crusher followed by a hammer mill. In addition to natural aggregates (NA), these two processes were used to crush three types of concrete made in the laboratory (L20, L45 and L65) and three others from the precast industry (P20, P45 and P65). The coarse natural aggregates were totally replaced by coarse recycled concrete aggregates. The recycled aggregate concrete mixes were compared with reference concrete mixes made using only NA, and the following properties related to mechanical and durability performance were tested: compressive strength; splitting tensile strength; modulus of elasticity; carbonation resistance; chloride penetration resistance; water absorption by capillarity; water absorption by immersion; and shrinkage. The results show that the PSC process leads to better performance, especially in the durability properties.
Abstract:
This paper deals with a hierarchical structure composed of an event-based supervisor at a higher level and two distinct proportional-integral (PI) controllers at a lower level, namely fuzzy PI control and fractional-order PI control. The controllers are applied to a variable-speed wind energy conversion system with a doubly-fed induction generator. The event-based supervisor analyses the operation state of the wind energy conversion system among four possible operational states: park, start-up, generating or brake, and sends the operation state to the controllers at the lower level. In the start-up state, the controllers act only on the electric torque while the pitch angle is kept at zero. In the generating state, the controllers must act on the pitch angle of the blades in order to maintain the electric power around the nominal value, thus ensuring that the safety conditions required for integration in the electric grid are met. Comparisons between fuzzy PI and fractional-order PI pitch controllers applied to a wind turbine benchmark model are given and simulation results obtained with Matlab/Simulink are shown. From a closed-loop point of view, the fuzzy PI controller allows a smoother response at the expense of a larger number of variations of the pitch angle, implying frequent switches between operational states. On the other hand, the fractional-order PI controller allows an oscillatory response with less control effort, reducing switches between operational states. (C) 2015 Elsevier Ltd. All rights reserved.
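The supervisor logic described above can be sketched as a small state machine. The wind-speed thresholds (cut-in, rated, cut-out) below are illustrative assumptions for the sketch, not values from the paper:

```python
# Sketch of an event-based supervisor with the four operational states
# described above. The wind-speed thresholds are illustrative assumptions.
def supervisor_state(wind_speed, fault=False,
                     cut_in=4.0, rated=12.5, cut_out=25.0):
    """Return one of the four operational states: park, start-up,
    generating or brake."""
    if fault or wind_speed >= cut_out:
        return "brake"          # unsafe conditions: stop the turbine
    if wind_speed < cut_in:
        return "park"           # too little wind to operate
    if wind_speed < rated:
        return "start-up"       # act on electric torque, pitch angle = 0
    return "generating"         # act on pitch to hold nominal power
```

The returned state would be sent to the lower-level PI controllers, which select the appropriate actuation (torque or pitch) accordingly.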
Abstract:
An adaptive antenna array combines the signal of each element, using some constraints to produce the radiation pattern of the antenna, while maximizing the performance of the system. Direction of arrival (DOA) algorithms are applied to determine the directions of impinging signals, whereas beamforming techniques are employed to determine the appropriate weights for the array elements, to create the desired pattern. In this paper, a detailed analysis of both categories of algorithms is made for the case of a planar antenna array. Several simulation results show that it is possible to point an antenna array in a desired direction based on the DOA estimation and on the beamforming algorithms. The algorithms used are also compared in terms of runtime and accuracy, characteristics that depend on the SNR of the incoming signal.
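As a minimal illustration of the beamforming side, the sketch below computes conventional delay-and-sum weights for a uniform linear array, a simpler geometry than the planar array studied in the paper; the half-wavelength spacing and the steering angles are assumptions made for the example:

```python
import numpy as np

# Delay-and-sum beamforming sketch for a uniform linear array (ULA).
# Half-wavelength spacing and the angles below are example assumptions.
def steering_vector(theta_deg, n_elems, d_over_lambda=0.5):
    """Array response of a ULA towards direction theta (degrees)."""
    n = np.arange(n_elems)
    return np.exp(2j * np.pi * d_over_lambda * n * np.sin(np.deg2rad(theta_deg)))

def delay_and_sum_weights(theta_deg, n_elems):
    """Conventional weights that steer the main lobe towards theta."""
    return steering_vector(theta_deg, n_elems) / n_elems

w = delay_and_sum_weights(30.0, 8)
gain_steer = abs(np.vdot(w, steering_vector(30.0, 8)))   # response at 30 deg
gain_off = abs(np.vdot(w, steering_vector(-40.0, 8)))    # response off-axis
```

The weights give unit response towards the steered direction and attenuate signals arriving from other angles; in an adaptive array the steering direction would come from the DOA estimate.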
Abstract:
This paper describes the implementation of a distributed model predictive approach for automatic generation control. Performance results are discussed by comparing classical techniques (based on integral control) with model predictive control solutions (centralized and distributed) for different operational scenarios with two interconnected networks. These scenarios include variable load levels (ranging from a small to a large ratio of unbalanced generated power to power consumption) and, simultaneously, variable distance between the interconnected network systems. For the two networks, the paper also examines the impact of load variation in an island context (each network isolated from the other).
Abstract:
The aim of this study is to evaluate lighting conditions and speleologists' visual performance using optical filters when exposed to the lighting conditions of cave environments. A cross-sectional study was conducted. Twenty-three speleologists were submitted to an evaluation of visual function in a clinical lab. An examination of visual acuity, contrast sensitivity, stereoacuity and flashlight illuminance levels was also performed on 16 of the 23 speleologists at two caves deprived of natural lighting. Two organic filters (450 nm and 550 nm) were used to compare visual function with and without filters. The mean age of the speleologists was 40.65 (± 10.93) years. We detected visual impairment in 26.1% of the participants, of which refractive error (17.4%) was the major cause. In the cave environment the majority of the speleologists used a head flashlight with a mean illuminance of 451.0 ± 305.7 lux. Binocular visual acuity (BVA) was -0.05 ± 0.15 LogMAR (20/18). BVA for distance without filter was not statistically different from BVA with the 550 nm or 450 nm filters (p = 0.093). Significantly improved contrast sensitivity was observed with the 450 nm filter at the 6 cpd (p = 0.034) and 18 cpd (p = 0.026) spatial frequencies. There were no signs or symptoms of visual pathologies related to cave exposure. Illuminance levels were adequate for the majority of the activities performed. The enhancement in contrast sensitivity with filters could potentially improve tasks related to the activities performed in the cave.
Abstract:
Internship report submitted to the Escola Superior de Teatro e Cinema in fulfilment of the requirements for the degree of Master in Performing Arts – specialisation in Theatre-Music.
Abstract:
The basic objective of this work is to evaluate the durability of self-compacting concrete (SCC) produced in binary and ternary mixes using fly ash (FA) and limestone filler (LF) as partial replacement of cement. The main characteristics that set SCC apart from conventional concrete (fundamentally its fresh-state behaviour) essentially depend on the greater or lesser content of various constituents, namely: greater mortar volume (more ultrafine material in the form of cement and mineral additions); proper control of the maximum size of the coarse aggregate; and use of admixtures such as superplasticizers. Significant amounts of mineral additions are thus incorporated to partially replace cement, in order to improve the workability of the concrete. These mineral additions necessarily affect the concrete's microstructure and its durability. Therefore, notwithstanding the many well-documented and acknowledged advantages of SCC, a better understanding of its behaviour is still required, in particular when its composition includes significant amounts of mineral additions. An ambitious working plan was devised: first, the SCC's microstructure was studied and characterized, and afterwards the main transport and degradation mechanisms of the SCC produced were studied and characterized by means of SEM image analysis, chloride migration, electrical resistivity, and carbonation tests. It was then possible to draw conclusions about the SCC's durability. The properties studied are strongly affected by the type and content of the additions. Also, the use of ternary mixes proved to be extremely favourable, confirming the expected beneficial effect of the synergy between LF and FA. © 2015 RILEM.
Abstract:
Over the last few decades there has been a trend to build collaboration platforms as enablers for groups of enterprises to jointly provide integrated services and products. As a result, the notion of a business ecosystem is gaining wider acceptance. However, a critical issue that is still open, despite some efforts in this area, is the identification of adequate performance indicators to measure and motivate sustainable collaboration. This work-in-progress addresses this concern, briefly presenting the state of the art of relevant contributing areas such as collaborative networks, business ecosystems, enterprise performance indicators, social network analysis, and supply chains. Complementarily, through an assessment of current gaps, the research challenges are identified and an approach for further development is proposed.
Abstract:
Previous work by our group introduced a novel concept and sensor design for “off-the-person” ECG, for which evidence on how it compares against standard clinical-grade equipment has been largely missing. Our objectives with this work are to characterise the off-the-person approach in light of the current ECG systems landscape, and assess how the signals acquired using this simplified setup compare with clinical-grade recordings. Empirical tests have been performed with real-world data collected from a population of 38 control subjects, to analyze the correlation between both approaches. Results show off-the-person data to be correlated with clinical-grade data, demonstrating the viability of this approach to potentially extend preventive medicine practices by enabling the integration of ECG monitoring into multiple dimensions of people’s everyday lives. © 2015, IUPESM and Springer-Verlag Berlin Heidelberg.
Abstract:
The objective of this research is the production of concrete with recycled aggregates (RA) from various CDW plants around Portugal. The influence of the RA collection location, and consequently of their composition, on the characteristics of the concrete produced was analysed. In the mixes produced in this research, RA from five plants (Valnor, Vimajas, Ambilei, Europontal and Retria) were used: in three of them coarse and fine RA were analysed and in the remaining ones only coarse RA were used. The experimental campaign comprised two tests on fresh concrete (Abrams cone slump and density) and eight on hardened concrete (compressive strength in cubes and cylinders, splitting tensile strength, modulus of elasticity, water absorption by immersion and capillarity, and carbonation and chloride penetration resistance). It was found that the use of RA causes a quality decrease in concrete. However, there was a wide scatter of results depending on the plant where the RA were collected, because of the variation in the composition of the RA. It was also found that the use of fine RA causes a more significant performance loss in the concrete properties analysed than the use of coarse RA. © (2015) Trans Tech Publications, Switzerland.
Abstract:
The capability to anticipate a contact with another device can greatly improve the performance and user satisfaction not only of mobile social network applications but of any other relying on some form of data harvesting or hoarding. One of the most promising approaches for contact prediction is to extrapolate from past experiences. This paper investigates the recurring contact patterns observed between groups of devices using an 8-year dataset of wireless access logs produced by more than 70000 devices. This effort made it possible to model the probabilities of occurrence of a contact at a predefined date between groups of devices using a power-law distribution that varies according to neighbourhood size and recurrence period. In the general case, the model can be used by applications that need to disseminate large datasets by groups of devices. As an example, the paper presents and evaluates an algorithm that provides daily contact predictions, based on the history of past pairwise contacts and their duration. Copyright © 2015 ICST.
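A power-law recurrence model of the kind described can be sketched as follows; the scale and exponent are illustrative placeholders, not the values fitted from the 8-year dataset:

```python
# Hedged sketch of a power-law contact-recurrence model in the spirit of the
# paper above; scale and exponent are illustrative assumptions, not fitted.
def contact_probability(recurrence_days, scale=0.8, exponent=1.5):
    """Probability that a past contact recurs after `recurrence_days` days,
    decaying as a power law of the recurrence period."""
    return min(1.0, scale * recurrence_days ** (-exponent))
```

In the paper the parameters would additionally vary with neighbourhood size, so a full model would fit one (scale, exponent) pair per group-size class.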
Abstract:
The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]. The nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17]. The nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, under the maximum likelihood setup [19], the constrained least-squares approach [20], the spectral signature matching [21], the spectral angle mapper [22], and the subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures.
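Under the linear mixing model with known endmember signatures, unmixing reduces to a least-squares problem, as the paragraph above notes. A minimal sketch with simulated data (the dimensions and signatures below are arbitrary):

```python
import numpy as np

# Linear mixing model x = M a + n with known endmember signatures M, as in
# the text; dimensions and signatures below are arbitrary simulated values.
rng = np.random.default_rng(0)
bands, endmembers = 50, 3
M = rng.random((bands, endmembers))        # endmember signature matrix
a_true = np.array([0.2, 0.5, 0.3])         # fractional abundances (sum to 1)
x = M @ a_true                             # noiseless mixed-pixel spectrum

# With known signatures, unmixing is a least-squares problem
a_hat, *_ = np.linalg.lstsq(M, x, rcond=None)
```

A practical estimator would additionally enforce the nonnegativity and sum-to-one constraints on the abundances, which the plain least-squares solution ignores.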
As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, to feature extraction, and to unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of observed data yielding statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels and (2) the process of pixel selection, which plays the role of mixed sources, is not straightforward. In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case of hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among abundances. This dependence compromises ICA applicability to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades the ICA performance.
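The dependence argument above is easy to verify numerically: abundance fractions that sum to one are negatively coupled, so they cannot be mutually independent as ICA assumes. A small check with Dirichlet-distributed abundances:

```python
import numpy as np

# Numeric check of the dependence argument above: abundances that sum to
# one are negatively coupled, so they cannot be mutually independent as
# ICA assumes. Dirichlet samples make this visible in the covariance.
rng = np.random.default_rng(1)
A = rng.dirichlet(alpha=[1.0, 1.0, 1.0], size=10000)   # each row sums to 1

cov = np.cov(A, rowvar=False)
off_diag = cov[0, 1]   # negative: a larger fraction forces the others down
```

For a symmetric Dirichlet the off-diagonal covariances are negative by construction, which is exactly the kind of statistical dependence that violates the ICA source model.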
IFA [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance. Considering the linear mixing model, hyperspectral observations are in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. The MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at a lower computational complexity, some algorithms such as the vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and the N-FINDR [45] still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requirement that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, very often, the processing of hyperspectral data, including unmixing, is preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR).
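The pure-pixel assumption behind PPI-like algorithms can be illustrated with a toy example: the extremes of any linear projection of points in a simplex are attained at its vertices, so only pure pixels ever show up as projection extremes. The two-band endmember matrix below is made up for the illustration:

```python
import numpy as np

# Toy illustration of the pure-pixel assumption behind PPI-like algorithms:
# extremes of any linear projection of simplex data land on its vertices,
# i.e. on the pure pixels. The two-band endmember matrix is made up.
rng = np.random.default_rng(3)
E = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])     # 3 endmembers, 2 bands

A_mixed = rng.dirichlet([5.0, 5.0, 5.0], size=97)      # strictly mixed pixels
A = np.vstack([np.eye(3), A_mixed])                    # rows 0-2 are pure
X = A @ E                                              # observed spectra

counts = np.zeros(len(X), dtype=int)
for _ in range(500):                                   # random "skewers"
    proj = X @ rng.standard_normal(2)
    counts[np.argmax(proj)] += 1
    counts[np.argmin(proj)] += 1

pure_candidates = set(np.flatnonzero(counts))          # only pure pixels hit
```

When no pure pixel is present in the data, the projection extremes fall on mixed pixels instead, which is why the pure-pixel requirement matters for these algorithms.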
Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. The newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations. To overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performances. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model. This model takes into account the degradation mechanisms normally found in hyperspectral applications—namely, signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to data. The MOG parameters (number of components, means, covariances, and weights) are inferred using the minimum description length (MDL) based algorithm [55]. We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, where abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm.
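The PCA dimensionality-reduction step mentioned above can be sketched via the SVD of the centered data matrix; the simulated cube dimensions, latent rank, and noise level are illustrative assumptions:

```python
import numpy as np

# PCA via SVD as the dimensionality-reduction step mentioned above; the
# simulated data matrix (pixels x bands), its latent rank and the noise
# level are illustrative assumptions.
rng = np.random.default_rng(2)
pixels, bands, k = 200, 50, 3
X = rng.random((pixels, k)) @ rng.random((k, bands))   # rank-k signal
X += 0.01 * rng.standard_normal((pixels, bands))       # additive noise

Xc = X - X.mean(axis=0)                                # center the data
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
X_reduced = Xc @ Vt[:k].T                              # top-k projection

energy_k = (S[:k] ** 2).sum() / (S ** 2).sum()         # retained energy
```

Because the noise spreads over all components while the signal concentrates in the top few, the projection both cuts the dimensionality and improves the SNR, which is exactly why unmixing pipelines apply it first.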
This approach is in the vein of references 39 and 56, replacing independent sources represented by MOG with mixtures of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need to have pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief summary of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the ICA and IFA limitations in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.