862 results for Daedalus Thematic Mapper
Abstract:
Thesis submitted to the Instituto Superior de Estatística e Gestão de Informação da Universidade Nova de Lisboa in partial fulfillment of the requirements for the Degree of Doctor of Philosophy in Information Management – Geographic Information Systems
Abstract:
Internship report presented to the Escola Superior de Educação de Lisboa for the degree of Master in Teaching for the 1st and 2nd Cycles of Basic Education
Abstract:
Doctorate in Communication Sciences - Specialty in Communication and Arts
Abstract:
Dissertation presented as a partial requirement for the degree of Master in Geographic Information Science and Systems
Abstract:
Modern society is evolving at a rapid and progressive pace, particularly with respect to new technologies. Regardless of the field of knowledge, it is common ground that increasingly solid training is needed, and preparing future generations to use new technologies is fundamental. E-learning platforms are today a firmly established reality, with applications in every sector of activity. In the health area, and specifically on the topic of Basic Life Support (BLS), we found that most people lack knowledge on the subject; although courses on the topic exist, attending them is not always possible because of distance, lack of time, and the costs involved. The present study aims to contribute to the analysis and investigation of the potential of new multimedia technologies applied to the teaching of first aid, and of Basic Life Support in particular, through a digital educational resource in the form of a Learning Object. To this end, a thematic Learning Object focused on BLS was built. The MERLOT repository was used as a means of global distribution, in order to assess the potential and ease of distribution and cataloguing with metadata. The "SBVOA" (the Learning Object developed in this study) was also placed online, on a page created specifically for that purpose. This dissertation presents the study carried out to investigate the learning potential offered by the Learning Object developed, assessing whether it can be an interactive alternative for BLS education and thereby contribute to reducing possible loss of life.
Abstract:
The use of laptops and the Internet has created the technological conditions for instructors and students to take advantage of the diversity of online information, communication, collaboration, and sharing. Integrating Internet services into teaching practices can drive thematic, social, and digital improvement for the agents involved. There are many benefits to using a Learning Management System (LMS) such as Moodle to support lectures in higher education. We also consider its implications for student support and online interaction, leading educational agents to combine different learning environments, where face-to-face instruction is blended with computer-mediated instruction (blended learning), increasing the possibilities for better quality and quantity of human communication in a learning setting. In general, learning management systems contain synchronous and asynchronous communication tools, management features, and assessment utilities. These assessment utilities allow lecturers to systematize basic assessment tasks. Assessments can be delivered directly to the student and, upon completion, immediately returned with grades and detailed feedback. Learning management systems can therefore also be used for assessment purposes in higher education.
Abstract:
ABSTRACT - The increasing incidence of chronic diseases represents an enormous challenge for all health systems, and health care for chronic patients has become a problem for Western societies. The poorest countries suffer the most, although developed countries are also seeing a marked rise in chronic diseases, which are estimated to account for more than 60% of the global disease burden by 2020 (WHO, 2001). The adaptation of current health models to chronic patients has not met its goals, which has led, for some years now, to a search for more effective and efficient alternatives. One of the market pressures being felt is a stronger emphasis on health promotion and disease prevention. The concept of "health" has shifted from "absence of disease" to "physical and psychological well-being". The focus of health care has had to adapt accordingly, so that the scope of care is now a continuum of services ranging from health promotion, preventive medicine, and curative medicine to continued and palliative care. Information and communication technologies will play an important role in this trend, enabling continuous links between consumers and health care providers. At the same time, the potential of the Internet, mobile communications, portable devices, and electronic instruments is evident in the development of patient-centred e-Health services for monitoring, follow-up, and control of patients outside the hospital. The overall objective of the present study is to design a research project for the subsequent evaluation of the perceived health status of patients followed in the hypocoagulation consultation of the Hospital de Santa Marta. Given the scarcity of research on this topic in Portugal, the study is exploratory, descriptive, and comparative in nature, within a quantitative approach. The field of analysis consists of comparing patients on oral anticoagulation followed in the conventional cardiology consultation with patients followed in the Airmed programme (via mobile communications). The SF-36 questionnaire was used to assess perceived health status.
Abstract:
INTED2010, the 4th International Technology, Education and Development Conference was held in Valencia (Spain), on March 8, 9 and 10, 2010.
Abstract:
Internship report submitted to the Escola Superior de Teatro e Cinema in fulfilment of the requirements for the degree of Master in Theatre - specialization in Production.
Abstract:
Abstract: This study was carried out within the Master's programme in Rehabilitation, specialty of Visual Impairment, taught jointly by the Faculties of Ciências Médicas and Motricidade Humana. The dissertation, entitled "Teacher Training for the Teaching of Mathematics in Basic Education to Students with Visual Impairment", reports an investigation into that theme. The first part presents a literature review exploring several relevant concepts, such as visual impairment, the mathematics curriculum in basic education, curricular adaptations for visually impaired students, assistive technologies, and teacher training, among others. The second part presents the theoretical grounds underlying the choice of research methodology and instruments, as well as a description of the research procedures. Fifty-two mathematics teachers from the three cycles of basic education, all of whom had blind or low-vision students in their classes, took part in the study. Data were collected through a questionnaire administered to the teachers. The third part presents some results of this research. Regarding teacher training in visual impairment (N = 52), most teachers (58%) report no knowledge of the subject, while 40% report some knowledge. A large proportion of these teachers considered the information they had received insufficient, leading us to conclude that courses on teacher training in visual impairment would be a pertinent proposal. We also conclude that teachers experience difficulties with the available materials and equipment, even though they report having had some information about them.
Abstract:
Master's dissertation presented to the Instituto de Contabilidade e Administração do Porto for the degree of Master in Entrepreneurship and Internationalization, under the supervision of Mestre Adalmiro Álvaro Malheiro de Castro Andrade Pereira
Abstract:
Internship report presented to the Instituto de Contabilidade e Administração do Porto for the degree of Master in Auditing, under the supervision of Dr. Paulo Jorge Seabra dos Anjos, Statutory Auditor at António Anjos, F. Brandão & Associados, SROC, Lda., and of Professor Susana Adelina Moreira Carvalho Basto
Abstract:
Master's degree in Geotechnical and Geoenvironmental Engineering
Abstract:
The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]. The nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17]. The nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model, and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, with the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures.
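The orthogonal subspace projection idea described in this abstract can be sketched in a few lines of NumPy. Everything below (band count, signatures, abundances, noise level) is synthetic and made up for illustration; it is not data from the chapter:

```python
import numpy as np

rng = np.random.default_rng(0)

L, p = 50, 3                       # spectral bands, endmembers (illustrative sizes)
M = rng.random((L, p))             # endmember signature matrix, one column per endmember
d = M[:, 0]                        # signature of interest
U = M[:, 1:]                       # undesired signatures

# Orthogonal subspace projector: annihilates the columns of U
P = np.eye(L) - U @ np.linalg.pinv(U)   # pinv(U) = (U^T U)^{-1} U^T for full-rank U

# A pixel mixing all endmembers plus a little noise
a = np.array([0.6, 0.3, 0.1])           # abundance fractions (sum to one)
x = M @ a + 0.001 * rng.standard_normal(L)

# After projection the undesired contributions are (almost) removed, so the
# matched-filter output is proportional to the abundance of the desired signature
score = d @ P @ x
print(score / (d @ P @ d))              # ≈ 0.6, the abundance of d
```

The normalisation by d @ P @ d makes the output directly interpretable as an abundance estimate under the linear model.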
As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, and target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases, the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, feature extraction, and unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of observed data that yields statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where the sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels, and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward. In the second approach, ICA is based on the assumption of mutually independent sources, which does not hold for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among abundances. This dependence compromises the applicability of ICA to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades ICA performance.
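The dependence argument above is easy to check numerically: abundance fractions that sum to one are necessarily correlated, hence not independent. This toy check draws abundances from a Dirichlet distribution, which is an assumption of the sketch (chosen because it lives on the probability simplex), not a claim about the chapter's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# 10,000 pixels, 3 endmembers; abundance vectors lie on the probability simplex
A = rng.dirichlet(alpha=[1.0, 1.0, 1.0], size=10_000)

assert np.allclose(A.sum(axis=1), 1.0)   # full additivity holds by construction

# The sum-to-one constraint forces negative covariance between any two
# abundance fractions, so they cannot be statistically independent
cov = np.cov(A[:, 0], A[:, 1])[0, 1]
print(cov)   # negative; theory for Dirichlet(1,1,1): -1/36 ≈ -0.0278
```

This is exactly the dependence that breaks the independence assumption underlying ICA (and IFA) when applied to abundance fractions.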
IFA [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique for unmixing independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance. Under the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. The MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at a lower computational complexity, some algorithms, such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45], still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, the processing of hyperspectral data, including unmixing, is very often preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR).
Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. The newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations. To overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performance. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model that takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using the minimum description length (MDL) based algorithm [55]. We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, where abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant sum (full additivity) constraints. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm.
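The dimensionality-reduction step mentioned above can be sketched with PCA computed via the SVD. The scene below is synthetic (sizes, signatures, noise level, and Dirichlet abundances are all assumptions of this sketch); the point is that under the linear model with sum-to-one abundances, the centred data are essentially (p - 1)-dimensional:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic scene: N pixels in L bands generated by p endmembers (linear model)
L, p, N = 100, 4, 2_000
M = rng.random((L, p))                        # endmember signatures
A = rng.dirichlet([1.0] * p, size=N).T        # p x N abundances (sum to one)
X = M @ A + 0.01 * rng.standard_normal((L, N))

# PCA: centre the data, then keep the k leading singular directions
Xc = X - X.mean(axis=1, keepdims=True)
U, s, _ = np.linalg.svd(Xc, full_matrices=False)
k = p - 1                                     # centred simplex of p vertices spans p-1 dims
Y = U[:, :k].T @ Xc                           # k x N reduced representation

# Almost all signal energy lives in the first p-1 components
explained = (s[:k] ** 2).sum() / (s ** 2).sum()
print(explained)   # close to 1
```

Working in the reduced k-dimensional space is what makes the later simplex-based and statistical unmixing steps computationally affordable.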
This approach is in the vein of references 39 and 56, replacing the independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need for pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief summary of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the limitations of ICA and IFA in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
Abstract:
Hyperspectral remote sensing exploits the electromagnetic scattering patterns of different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then to decompose a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions.
Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], the spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which does not hold for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors with an unmixing matrix that minimizes the mutual information among sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when the sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33]. Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref. [37] is also of MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures.
The MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a logarithmic law [39] to assure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms, such as the pixel purity index (PPI) [35] and N-FINDR [40], still find the minimum volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to the extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data.
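The skewer-and-count procedure described for PPI can be sketched as below. This is a deliberately simplified mock-up on a tiny synthetic scene (the real algorithm runs after MNF preprocessing and uses far more skewers); the data, sizes, and function name are assumptions of the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def ppi_scores(X, n_skewers=500, rng=rng):
    """X: d x N matrix of (dimension-reduced) spectral vectors.
    Returns, per pixel, how often it is an extreme of a random projection."""
    d, N = X.shape
    counts = np.zeros(N, dtype=int)
    skewers = rng.standard_normal((n_skewers, d))
    for w in skewers:
        proj = w @ X                     # project every pixel onto the skewer
        counts[np.argmin(proj)] += 1     # both extremes of each skewer count
        counts[np.argmax(proj)] += 1
    return counts

# Toy data: 3 pure pixels (simplex vertices) plus 500 interior mixtures
E = np.eye(3)                            # endmembers as columns (toy signatures)
A = rng.dirichlet([2.0, 2.0, 2.0], size=500).T
X = np.hstack([E, E @ A])                # pixels 0..2 are the pure ones

scores = ppi_scores(X)
print(np.argsort(scores)[-3:])           # the three pure pixels score highest
```

Because every linear functional over a simplex attains its extremes at a vertex, only the pure pixels ever accumulate counts in this toy setup, which is exactly the property PPI exploits.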
ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of a lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace, and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46]. In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices. The latter is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data in the least-squares sense [48, 49]. We note, however, that VCA works with projected and with unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data.
The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of this projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR, yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Sections 19.3 and 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
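The iterative projection loop described in this abstract can be sketched as follows. This is a bare-bones illustration of the idea on a toy scene with known pure pixels, not the published VCA implementation (which adds SNR-dependent projective steps); the function name and all data are assumptions of the sketch:

```python
import numpy as np

rng = np.random.default_rng(1)

def extract_endmembers(X, p, rng=rng):
    """Greedy VCA-style extraction: X is d x N, p the number of endmembers.
    Assumes at least one pure pixel per endmember is present in X."""
    d, N = X.shape
    idx = []
    E = np.zeros((d, 0))                       # endmembers found so far
    for _ in range(p):
        # random direction, made orthogonal to the span of found endmembers
        w = rng.standard_normal(d)
        if E.shape[1] > 0:
            w -= E @ np.linalg.lstsq(E, w, rcond=None)[0]
        proj = np.abs(w @ X)
        idx.append(int(np.argmax(proj)))       # extreme of the projection
        E = X[:, idx]
    return idx

# Toy scene: 3 pure pixels (simplex vertices) plus 300 interior mixtures
M = np.eye(3)                                  # toy endmember signatures
A = rng.dirichlet([2.0, 2.0, 2.0], size=300).T
X = np.hstack([M, M @ A])                      # pixels 0..2 are pure

found = sorted(extract_endmembers(X, 3))
print(found)                                   # → [0, 1, 2]
```

Each pass finds one simplex vertex because, after orthogonalizing against the vertices already found, interior mixtures always project strictly inside the extremes set by the remaining vertices.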