75 results for Well-Posed Problem
Abstract:
Report on the Supervised Professional Practice, Master's in Pre-School Education
Abstract:
Internship report presented to the Escola Superior de Educação de Lisboa to obtain the degree of Master in Teaching for the 1st and 2nd Cycles
Abstract:
Final Master's project to obtain the degree of Master in Civil Engineering, specialization area in Buildings
Abstract:
Project work carried out to obtain the degree of Master in Informatics and Computer Engineering
Abstract:
Introduction: University students are frequently exposed to events that can cause stress and anxiety, producing elevated cardiovascular responses. Repeated exposure to academic stress has implications for students' success and well-being and may contribute to the development of long-term health problems. Objective: To identify stress levels and coping strategies in university students and assess the impact of the stress experience on heart rate variability (HRV). Methods: 17 university students, aged 19-23 years, completed the University Students Stress Inventory, the Depression Anxiety Stress Scales and the Ways of Coping Questionnaire. Two 24h Holter recordings were performed on academic activity days, one of which included an exam situation. Results: Students tend to present moderate stress levels and prefer problem-focused coping strategies to manage stress. Exam situations are perceived as significant stressors. Although we found no significant differences in HRV (SDNN) between days with and without an exam, we registered a lower SDNN score and a variation in heart rate (HR) related to the exam situation (maximum HR peak 10 minutes before the exam, and total HR recovery 20 minutes after the exam), reflecting sympathetic activation due to stress. Conclusions: These results suggest that academic events, especially those related to exam situations, are a source of stress in university students, with implications at the cardiovascular level, underlining the importance of interventions that help these students improve their coping skills and optimize stress management, in order to improve academic achievement and promote well-being and quality of life.
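For context, SDNN, the HRV index reported above, is simply the standard deviation of the normal-to-normal (NN) inter-beat intervals over a recording. A minimal sketch of how it could be computed from a list of NN intervals follows; the function and variable names are illustrative, not taken from the study.

```python
import numpy as np

def sdnn(nn_intervals_ms):
    """Standard deviation of NN (normal-to-normal) inter-beat intervals, in ms.

    A lower SDNN over a recording is commonly read as reduced overall HRV,
    consistent with sympathetic activation under stress.
    """
    nn = np.asarray(nn_intervals_ms, dtype=float)
    return nn.std(ddof=1)  # sample standard deviation

# Illustrative use with made-up intervals (ms):
print(sdnn([812, 790, 805, 830, 798, 815]))
```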
Abstract:
Introduction: Anxiety is a common problem in primary care and specialty medical settings. Treating an anxious patient takes more time and adds stress to staff. Unrecognised anxiety may lead to exam repetition and image artifacts, and may hinder scan performance. Reducing patient anxiety at the onset is probably the most useful means of minimizing artifactual FDG uptake, in both brown fat and skeletal muscle, as well as patient movement and claustrophobia. The aim of the study was to examine the effects of information giving on the anxiety levels of patients who are to undergo a PET/CT and whether the patient experience is enhanced with the creation of a guideline. Methodology: Two hundred and thirty-two patients were given two questionnaires, before and after the procedure, to determine their prior knowledge, concerns, expectations and experiences regarding the study. Verbal information was given by one of the technologists after the completion of the first questionnaire. Results: Our results show that the main causes of anxiety in patients having a PET/CT are fear of the procedure itself and fear of the results. The patients who suffered from greater anxiety were those who were scanned during the initial stage of a disease. No significant differences were found between pre-procedural and post-procedural anxiety levels. Findings with regard to satisfaction show that the amount of information given before the procedure does not change anxiety levels and, therefore, does not influence patient satisfaction. Conclusions: The performance of a PET/CT scan is an important and statistically significant generator of anxiety. PET/CT patients are often poorly informed and present with a range of anxieties that may ultimately affect examination quality. The creation of a guideline may reduce the stress of not knowing what will happen and the anxiety created, and may increase patients' satisfaction with the experience of having a PET/CT scan.
Abstract:
Report on the Supervised Professional Practice, Master's in Pre-School Education
Abstract:
Sandwich structures with soft cores are widely used in applications where a high bending stiffness is required without compromising the global weight of the structure, as well as in situations where good thermal and damping properties are important parameters to observe. As equivalent single-layer approaches are not the most adequate to describe realistically the kinematics, the stress distributions and the dynamic behaviour of this type of sandwich, where shear deformations and the extensibility of the core can be very significant, layerwise models may provide better solutions. Additionally, and in connection with this multilayer approach, the selection of different shear deformation theories according to the nature of the material that constitutes the core and the outer skins can predict the sandwich behaviour more accurately. In the present work the authors consider the use of different shear deformation theories to formulate different layerwise models, implemented through kriging-based finite elements. The viscoelastic material behaviour associated with the sandwich core is modelled using the complex approach, and the dynamic problem is solved in the frequency domain. The outer elastic layers considered in this work may also be made from different nanocomposites. The performance of the models developed is illustrated through a set of test cases. (C) 2015 Elsevier Ltd. All rights reserved.
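For readers unfamiliar with the frequency-domain treatment mentioned above: the viscoelastic core is commonly represented by a complex modulus, so that the discretized equations of motion become a complex-valued linear system at each excitation frequency. The sketch below is the generic form of that standard complex-modulus formulation, not the specific layerwise equations of the paper:

\[
E^{*}(\omega) = E'(\omega)\left[1 + \mathrm{i}\,\eta(\omega)\right],
\qquad
\left[\mathbf{K}^{*}(\omega) - \omega^{2}\mathbf{M}\right]\mathbf{u}(\omega) = \mathbf{f}(\omega),
\]

where E' is the storage modulus, η the material loss factor, K*(ω) the complex stiffness matrix assembled from the layerwise (here, kriging-based) finite elements, M the mass matrix, and u and f the displacement and force amplitude vectors.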
Abstract:
Basidiomycete strains synthesize several types of beta-D-glucans, which play a major role in the medicinal properties of mushrooms. Therefore, the specific quantification of these beta-D-glucans in mushroom strains is of great biochemical importance. Because published assay methods for these beta-D-glucans present some disadvantages, a novel colorimetric assay method for beta-D-glucan with alcian blue dye was developed. The complex formation was detected by following the decrease in absorbance around 620 nm and the hypsochromic shift from 620 to 606 nm (~14 nm) in a UV-Vis spectrophotometer. Analysis of variance was used to optimize the slope of the calibration curve, using an assay mixture containing 0.017% (w/v) alcian blue in 2% (v/v) acetic acid at pH 3.0. The high-throughput colorimetric assay method on microtiter plates was used for quantification of beta-D-glucans in the range of 0-0.8 µg, with a slope of 44.15 × 10⁻² and a limit of detection of 0.017 µg/well. Recovery experiments were carried out using a sample of Hericium erinaceus, which exhibited a recovery of 95.8% for beta-1,3-D-glucan. The present assay method exhibited a 10-fold higher sensitivity and a 59-fold lower limit of detection compared with the published congo red method. Beta-D-glucans of several mushroom strains were isolated from fruiting bodies and mycelia, and they were quantified by this assay method. This assay method is fast, specific and simple, and it can be used to quantify beta-D-glucans from other biological sources. (C) 2015 American Institute of Chemical Engineers
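Assuming the calibration is a simple linear relation between the absorbance decrease and the beta-D-glucan mass per well, as the reported slope of 44.15 × 10⁻² suggests, the conversion could look like the illustrative helper below; the constant name, units and example value are our assumptions, not figures from the paper.

```python
SLOPE_PER_UG = 44.15e-2  # reported calibration slope; units assumed to be absorbance decrease per microgram

def glucan_mass_ug(delta_absorbance):
    """Estimate beta-D-glucan mass (µg/well) from the measured absorbance decrease
    near 620 nm, assuming the linear calibration delta_A = slope * mass."""
    return delta_absorbance / SLOPE_PER_UG

# Example: an absorbance decrease of 0.22 would correspond to roughly 0.5 µg/well.
print(round(glucan_mass_ug(0.22), 2))
```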
Abstract:
Final Master's project to obtain the degree of Master in Communication Networks and Multimedia Engineering
Abstract:
In the last decade, local image features have been widely used in robot visual localization. In order to assess image similarity, a strategy exploiting these features compares raw descriptors extracted from the current image with those in the models of places. This paper addresses the ensuing step in this process, where a combining function must be used to aggregate results and assign each place a score. Casting the problem in the multiple classifier systems framework, in this paper we compare several candidate combiners with respect to their performance in the visual localization task. For this evaluation, we selected the most popular methods in the class of non-trained combiners, namely the sum rule and product rule. A deeper insight into the potential of these combiners is provided through a discriminativity analysis involving the algebraic rules and two extensions of these methods: the thresholded and the weighted modifications. In addition, a voting method, previously used in robot visual localization, is assessed. Furthermore, we address the process of constructing a model of the environment by describing how the model granularity impacts upon performance. All combiners are tested on a visual localization task, carried out on a public dataset. It is experimentally demonstrated that the sum rule extensions globally achieve the best performance, confirming the general agreement on the robustness of this rule in other classification problems. The voting method, whilst competitive with the product rule in its standard form, is shown to be outperformed by its modified versions.
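The non-trained combiners compared above can be stated very compactly. The sketch below, with function and variable names of our own choosing rather than the paper's, shows the plain sum and product rules plus a weighted sum variant for aggregating per-descriptor scores into a single score per place:

```python
import numpy as np

def sum_rule(scores):
    """scores: array of shape (n_descriptors, n_places) with per-descriptor
    similarity scores for each candidate place. Returns one score per place."""
    return np.asarray(scores).sum(axis=0)

def product_rule(scores):
    return np.asarray(scores).prod(axis=0)

def weighted_sum_rule(scores, weights):
    """Weighted modification of the sum rule; weights has shape (n_descriptors,)."""
    s = np.asarray(scores)
    return (np.asarray(weights)[:, None] * s).sum(axis=0)

# The localization decision is the place with the highest combined score:
scores = np.array([[0.9, 0.2, 0.4],
                   [0.7, 0.3, 0.6]])
print(int(np.argmax(sum_rule(scores))))      # -> 0
print(int(np.argmax(product_rule(scores))))  # -> 0
```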
Abstract:
This paper proposes an implementation, based on a multi-agent system, of a management system for automated negotiation of electricity allocation for charging electric vehicles (EVs) and simulates its performance. The widespread existence of charging infrastructures capable of autonomous operation is recognised as a major driver towards the mass adoption of EVs by mobility consumers. Eventually, conflicting requirements from both the power grid and EV owners require automated middleman aggregator agents to intermediate all operations, for example, bidding and negotiation, between these parties. Multi-agent systems are designed to provide distributed, modular, coordinated and collaborative management systems; therefore, they seem suitable to address the management of such complex charging infrastructures. Our solution consists of the implementation of virtual agents to be integrated into the management software of a charging infrastructure. We start by modelling the multi-agent architecture, using a federated, hierarchically layered setup, as well as the agents' behaviours and interactions. Each of these layers comprises several components, for example, databases, decision-making and auction mechanisms. The implementation of the multi-agent platform and auction rules, and of models for battery dynamics, is also addressed. Four scenarios were predefined to assess the management system performance under real usage conditions, considering different types of EV owner profiles, different infrastructure configurations and usage, and different loads on the utility grid (where real data from the concession holder of the Portuguese electricity transmission grid are used). Simulations carried out with the four scenarios validate the performance of the modelled system while complying with all the requirements. Although all of these simulations have been performed for one charging station alone, a multi-agent design may in the future be used for the higher-level problem of distributing energy among charging stations. Copyright (c) 2014 John Wiley & Sons, Ltd.
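As a rough illustration of the kind of negotiation an aggregator agent mediates, the sketch below allocates limited grid capacity among EV bids with a generic highest-price-first rule; the class, field names and bidding rule are our own assumptions, not necessarily the auction mechanism implemented in the paper.

```python
from dataclasses import dataclass

@dataclass
class EvBid:
    owner: str
    requested_kw: float   # charging power requested by the EV agent
    price_per_kwh: float  # price the owner is willing to pay

def allocate(bids, available_kw):
    """Greedy highest-price-first allocation of limited capacity among EV bids.
    Returns {owner: allocated_kw}."""
    allocation = {}
    for bid in sorted(bids, key=lambda b: b.price_per_kwh, reverse=True):
        granted = min(bid.requested_kw, available_kw)
        allocation[bid.owner] = granted
        available_kw -= granted
        if available_kw <= 0:
            break
    return allocation

bids = [EvBid("ev1", 7.4, 0.18), EvBid("ev2", 11.0, 0.25), EvBid("ev3", 3.7, 0.12)]
print(allocate(bids, available_kw=15.0))  # ev2 fully served, ev1 partially, ev3 not served
```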
Abstract:
As is widely known, in structural dynamic applications, ranging from structural coupling to model updating, the incompatibility between measured and simulated data is inevitable, due to the problem of coordinate incompleteness. Usually, the experimental data from conventional vibration testing are collected at a few translational degrees of freedom (DOF), due to applied forces, using hammer or shaker exciters, over a limited frequency range. Hence, one can only measure a portion of the receptance matrix: a few columns, related to the forced DOFs, and a few rows, related to the measured DOFs. In contrast, by finite element modeling, one can obtain a full data set, both in terms of DOFs and identified modes. Over the years, several model reduction techniques have been proposed, as well as data expansion ones. However, the latter are significantly fewer and the demand for efficient techniques is still an issue. In this work, one proposes a technique for expanding measured frequency response functions (FRF) over the entire set of DOFs. This technique is based upon a modified Kidder's method and the principle of reciprocity, and it avoids the need for modal identification, as it uses the measured FRFs directly. In order to illustrate the performance of the proposed technique, a set of simulated experimental translational FRFs is taken as reference to estimate rotational FRFs, including those that are due to applied moments.
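As a brief aside on the reciprocity argument used above: for a linear, reciprocal structure the receptance matrix is symmetric, so each measured column immediately supplies the corresponding row. The notation below is generic, not taken from the paper:

\[
H_{jk}(\omega) = \frac{X_{j}(\omega)}{F_{k}(\omega)} = H_{kj}(\omega),
\]

hence measuring the responses at the instrumented DOFs j due to a force at DOF k (one column of H) also yields the responses at DOF k due to forces applied at each j (one row of H); the expansion step then estimates the remaining, unmeasured entries, including the rotational FRFs associated with applied moments.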
Abstract:
The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixing of components originated by the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]. The nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17]. The nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, under the maximum likelihood setup [19], the constrained least-squares approach [20], the spectral signature matching [21], the spectral angle mapper [22], and the subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, to feature extraction, and to unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of observed data yielding statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed by the abundance fractions, and in references 9, 25, and 31–38, where sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward.
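For reference, the linear mixing model with the abundance constraints invoked above can be written as follows (the symbols are generic and may differ from the chapter's own notation):

\[
\mathbf{y} = \mathbf{M}\,\boldsymbol{\alpha} + \mathbf{n},
\qquad
\alpha_{i} \ge 0,\quad \sum_{i=1}^{p} \alpha_{i} = 1,
\]

where y is the L-band spectral vector observed at a pixel, M = [m_1, ..., m_p] collects the p endmember signatures, α holds the abundance fractions, and n is additive noise; with M known, estimating α under these constraints is the constrained least-squares or maximum likelihood problem referred to above.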
In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case of hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among abundances. This dependence compromises ICA applicability to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades the ICA performance. IFA [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance. Considering the linear mixing model, hyperspectral observations are in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. The MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at a lower computational complexity, some algorithms such as the vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and the N-FINDR [45] still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, very often, the processing of hyperspectral data, including unmixing, is preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. The newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations. To overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performances. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model. This model takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to data. The MOG parameters (number of components, means, covariances, and weights) are inferred using the minimum description length (MDL) based algorithm [55].
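The dependence induced by the constant-sum constraint, noted at the start of this passage, is easy to see numerically. In the toy sketch below (our own illustration, not the chapter's experiment), abundances drawn from a Dirichlet distribution sum to one by construction and are therefore negatively correlated, so they cannot be mutually independent as ICA assumes:

```python
import numpy as np

rng = np.random.default_rng(0)
# 10,000 pixels, 3 endmembers; abundances sum to one by construction.
abundances = rng.dirichlet(alpha=[1.0, 1.0, 1.0], size=10_000)

print(abundances.sum(axis=1)[:3])          # each row sums to 1
print(np.corrcoef(abundances.T).round(2))  # off-diagonal correlations are negative
```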
We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, where abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm. This approach is in the vein of references 39 and 56, replacing independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need to have pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief overview of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the ICA and IFA limitations in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
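To make the constrained linear unmixing problem described above concrete, here is a minimal fully constrained least-squares sketch for a single pixel with known endmembers; it is a generic illustration under the linear mixing model, not the Dirichlet-based EM scheme outlined in Section 6.7, and the function and variable names are our own.

```python
import numpy as np
from scipy.optimize import minimize

def unmix_fcls(y, M):
    """Estimate abundances a minimizing ||y - M a||^2 subject to a >= 0 and sum(a) = 1."""
    p = M.shape[1]
    res = minimize(
        fun=lambda a: np.sum((y - M @ a) ** 2),
        x0=np.full(p, 1.0 / p),
        bounds=[(0.0, 1.0)] * p,
        constraints=[{"type": "eq", "fun": lambda a: a.sum() - 1.0}],
        method="SLSQP",
    )
    return res.x

# Toy example: 4 bands, 2 endmembers, true abundances (0.7, 0.3).
M = np.array([[1.0, 0.2], [0.8, 0.4], [0.3, 0.9], [0.1, 1.0]])
y = M @ np.array([0.7, 0.3]) + 0.01 * np.random.default_rng(1).standard_normal(4)
print(unmix_fcls(y, M).round(2))  # approximately [0.7, 0.3]
```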
Abstract:
Final Master's project to obtain the degree of Master in Civil Engineering