844 results for Global Knowledge Base
Abstract:
Project work for obtaining the Master's degree in Informatics and Computer Engineering
Abstract:
In recent years, several initiatives promoted by educational organizations have emerged to adapt Service-Oriented Architectures (SOA) to e-learning. These initiatives, commonly named eLearning Frameworks, share a common goal: to create flexible learning environments by integrating the heterogeneous systems already available in many educational institutions. However, these frameworks were designed for the integration of systems participating in business-like processes, rather than in complex pedagogical processes such as those related to automatic evaluation. Consequently, their knowledge bases lack some fundamental components that are needed to model pedagogical processes. The objective of the research described in this paper is to study the applicability of eLearning frameworks to modelling a network of heterogeneous eLearning systems, using the automatic evaluation of programming exercises as a case study. The paper surveys the existing eLearning frameworks to justify the selection of the e-Framework. This framework is described in detail, and the components missing from its knowledge base are identified: more precisely, a service genre, expression and usage model for an evaluation service. The extensibility of the framework is tested with the definition of this service. A concrete model for the evaluation of programming exercises is presented as a validation of the proposed approach.
Abstract:
Extracting the semantic relatedness of terms is an important topic in several areas, including data mining, information retrieval and web recommendation. This paper presents an approach for computing the semantic relatedness of terms using the knowledge base of DBpedia, a community effort to extract structured information from Wikipedia. Several approaches to extracting semantic relatedness from Wikipedia using bag-of-words vector models are already available in the literature. The research presented in this paper explores a novel approach using paths on an ontological graph extracted from DBpedia. It is based on an algorithm for finding and weighting a collection of paths connecting concept nodes. This algorithm was implemented in a tool called Shakti, which extracts relevant ontological data for a given domain from DBpedia using its SPARQL endpoint. To validate the proposed approach, Shakti was used to recommend web pages on a Portuguese social site dedicated to alternative music, and the results of that experiment are reported in this paper.
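The path-finding-and-weighting idea can be illustrated with a small sketch. The toy graph, the exponential decay factor and the scoring rule below are illustrative assumptions, not the actual Shakti implementation:

```python
from collections import deque

def weighted_paths(graph, source, target, max_len=3):
    """Breadth-first search collecting all simple paths of up to
    max_len edges between two concept nodes in an ontological graph."""
    paths = []
    queue = deque([[source]])
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == target and len(path) > 1:
            paths.append(path)
            continue
        if len(path) - 1 < max_len:
            for nxt in graph.get(node, []):
                if nxt not in path:          # avoid cycles
                    queue.append(path + [nxt])
    return paths

def relatedness(graph, a, b, decay=0.5, max_len=3):
    """Score two concepts: each connecting path contributes
    decay**(number of edges), so shorter paths weigh more."""
    return sum(decay ** (len(p) - 1)
               for p in weighted_paths(graph, a, b, max_len))

# Hypothetical ontological graph (adjacency lists over DBpedia-like labels)
g = {
    "Punk_rock": ["Rock_music"],
    "Rock_music": ["Punk_rock", "Indie_rock"],
    "Indie_rock": ["Rock_music"],
}
print(relatedness(g, "Punk_rock", "Indie_rock"))  # one 2-edge path: 0.25
```

In a real deployment the adjacency lists would be populated from DBpedia via SPARQL rather than hard-coded.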
Abstract:
Dynamic and distributed environments are hard to model since they suffer from unexpected changes, incomplete knowledge, and conflicting perspectives and, thus, call for appropriate knowledge representation and reasoning (KRR) systems. Such KRR systems must handle sets of dynamic beliefs, be sensitive to communicated and perceived changes in the environment and, consequently, may have to drop current beliefs in the face of new findings or disregard any new data that conflicts with stronger convictions held by the system. Not only do they need to represent and reason with beliefs, but they must also perform belief revision to maintain the overall consistency of the knowledge base. One way of developing such systems is to use reason maintenance systems (RMS). In this paper we provide an overview of the most representative types of RMS, also known as truth maintenance systems (TMS), which are computational instances of the foundations-based theory of belief revision. An RMS module works together with a problem solver. The latter feeds the RMS with assumptions (core beliefs) and conclusions (derived beliefs), which are accompanied by their respective foundations. The role of the RMS module is to store the beliefs, associate with each belief (core or derived) the corresponding set of supporting foundations, and maintain the consistency of the overall reasoning by keeping, for each represented belief, the current supporting justifications. Two major approaches to reason maintenance are used: single-context and multiple-context reasoning systems. In single-context systems, each belief is associated with the beliefs that directly generated it (the justification-based TMS (JTMS) and the logic-based TMS (LTMS)), whereas in the multiple-context counterparts each belief is associated with the minimal set of assumptions from which it can be inferred (the assumption-based TMS (ATMS) and the multiple belief reasoner (MBR)).
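The justification-based (JTMS) behaviour described above can be sketched minimally. This class and its recompute-on-query strategy are illustrative simplifications (a real JTMS labels nodes IN/OUT incrementally and propagates label changes), not any of the surveyed systems:

```python
class JTMS:
    """Minimal justification-based truth maintenance sketch:
    a belief is IN if it is an assumption (core belief) or if some
    justification has all of its antecedents IN."""
    def __init__(self):
        self.justifications = {}   # belief -> list of antecedent sets
        self.assumptions = set()   # core beliefs fed by the problem solver

    def assume(self, belief):
        self.assumptions.add(belief)

    def justify(self, belief, antecedents):
        """Record a derived belief together with its foundations."""
        self.justifications.setdefault(belief, []).append(set(antecedents))

    def retract(self, belief):
        """Drop a core belief; derived beliefs lose support automatically."""
        self.assumptions.discard(belief)

    def is_in(self, belief, _seen=None):
        _seen = _seen or set()
        if belief in self.assumptions:
            return True
        if belief in _seen:                      # guard against circular support
            return False
        for ants in self.justifications.get(belief, []):
            if all(self.is_in(a, _seen | {belief}) for a in ants):
                return True
        return False

tms = JTMS()
tms.assume("bird")
tms.justify("flies", ["bird"])
print(tms.is_in("flies"))   # True while "bird" is believed
tms.retract("bird")          # dropping the core belief...
print(tms.is_in("flies"))   # ...withdraws the derived belief: False
```

The point of the sketch is the foundations-based behaviour: retracting a core belief silently invalidates everything that depended on it.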
Abstract:
Hyperspectral remote sensing exploits the electromagnetic scattering patterns of different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixing of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions.
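The linear mixing model can be made concrete with a small numerical sketch. The signature matrix, abundances and noise level below are toy values, not real spectra:

```python
import numpy as np

# Linear mixing model: each observed pixel is M @ alpha + noise, where the
# columns of M are endmember signatures and alpha are abundance fractions
# that are non-negative and sum to one.
rng = np.random.default_rng(0)
bands, endmembers = 6, 3
M = rng.random((bands, endmembers))          # toy endmember signature matrix
alpha = np.array([0.6, 0.3, 0.1])            # abundances: sum to 1
pixel = M @ alpha + 0.001 * rng.standard_normal(bands)

# With M known, unmixing reduces to a (here unconstrained) least-squares fit;
# a constrained solver would additionally enforce non-negativity and sum-to-one.
est, *_ = np.linalg.lstsq(M, pixel, rcond=None)
print(np.round(est, 2))
```

When the signatures are unknown, as discussed next, the same model turns into a blind source separation problem.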
Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors with an unmixing matrix that minimizes the mutual information among sources. If sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33]. Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref. [37] is also of MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures.
The MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to ensure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requirement that may not hold in some data sets. In any case, these algorithms find the set of purest pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data.
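The skewer-projection step of PPI can be sketched as follows; the 2-D "spectra" are toy values and the MNF preprocessing described above is omitted:

```python
import numpy as np

def ppi_scores(X, n_skewers=500, seed=0):
    """Pixel purity index sketch: project every spectral vector onto random
    'skewers' and count how often each pixel is an extreme of a projection.
    X has one pixel per row; the pixels with the highest scores are the purest."""
    rng = np.random.default_rng(seed)
    scores = np.zeros(len(X), dtype=int)
    for _ in range(n_skewers):
        skewer = rng.standard_normal(X.shape[1])
        proj = X @ skewer                 # projection of all pixels on the skewer
        scores[proj.argmax()] += 1        # extreme in the positive direction
        scores[proj.argmin()] += 1        # extreme in the negative direction
    return scores

# Pixels strictly inside the convex hull of the pure pixels never win a skewer
pure = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
mixed = np.array([[0.6, 0.5], [0.5, 0.6]])     # convex combinations of the above
X = np.vstack([pure, mixed])
print(ppi_scores(X))
```

Because the extremes of a linear projection are always attained at vertices of the convex hull, the two mixed pixels accumulate a score of zero.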
ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46]. In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the purest pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices, the latter based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data in the least-squares sense [48, 49]. We note, however, that VCA works with both projected and unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data.
The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR; yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Section 19.3 and Section 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
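The projection loop described above can be sketched in a few lines. This is a simplified illustration of the VCA idea on toy data (no signal-subspace estimation, no noise handling), not the authors' implementation:

```python
import numpy as np

def extract_endmembers(X, p, seed=0):
    """VCA-flavoured sketch: iteratively project the data onto a direction
    orthogonal to the span of the endmembers found so far and take the pixel
    with the largest absolute projection as the next endmember.
    X: one pixel per row; p: number of endmembers. Assumes pure pixels exist."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    E = np.zeros((d, 0))                       # endmember signatures as columns
    indices = []
    for _ in range(p):
        w = rng.standard_normal(d)             # random direction
        if E.shape[1] > 0:
            proj = E @ np.linalg.pinv(E)       # orthogonal projector onto span(E)
            w = w - proj @ w                   # keep only the orthogonal part
        idx = int(np.argmax(np.abs(X @ w)))    # extreme of the projection
        indices.append(idx)
        E = np.column_stack([E, X[idx]])
    return indices

# Toy data: 3 pure "spectra" plus one mixture of them
pure = np.array([[4.0, 0.0, 0.0], [0.0, 4.0, 0.0], [0.0, 0.0, 4.0]])
mixed = 0.5 * pure[0] + 0.3 * pure[1] + 0.2 * pure[2]
X = np.vstack([pure, mixed])
print(sorted(extract_endmembers(X, 3)))
```

Since the extreme of a projection is always a simplex vertex, the loop recovers the three pure pixels and never selects the mixture.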
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa, to obtain the Master's degree in Electrical and Computer Engineering
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa, to obtain the Master's degree in Electrical and Computer Engineering
Abstract:
Dissertation presented to obtain a Master's degree in Biotechnology
Abstract:
This academic internship report describes the work carried out at the company Civigest – Gestão de Projetos, Lda. over a period of six months, between December 2012 and June 2013. The report is, above all, a reflection of a productive period spent at the company, during which diverse tasks were performed, with the greatest emphasis on the study and preparation of designs and on the supervision of construction works in all their components. It therefore addresses areas of knowledge directly related to civil engineering projects. In the context of planning in engineering design firms, forecasts are an indispensable tool for meeting objectives. Project scheduling, and the effort of the technical staff to comply with it, are fundamental, since the fixed expenses of this activity relate mainly to labour costs. Inevitably linked to the studies and design department is the budgeting department, which helps design engineers choose solutions that do not compromise the target values of the contract. The internship report describes the activities carried out in the organisation's various departments, with emphasis on the construction supervision department, whose main objectives are the coordination and management of works, involving the preparation and approval of measurement and payment records, verification of strict compliance with the designs, promotion of design improvements, verification of the technical specifications of materials and their correct application, and the promotion of good relations and dialogue among the stakeholders of the contract. The time spent in the construction supervision department is crucial for a better perception of the difficulties encountered on site. The report also addresses another activity, concerning the analysis of construction solutions aimed at the thermal performance of buildings.
This is the scope of the internship report, in which some cases of works designed at Civigest and later monitored on site are presented. On this point, thermal improvement solutions are analysed. With less emphasis, but no less importance, the role of on-site safety coordination is also addressed.
Abstract:
Context-aware recommendation of personalised tourism resources is possible because of personal mobile devices and powerful data-filtering algorithms. The devices contribute computing capabilities, on-board sensors, ubiquitous Internet access and continuous user monitoring, whereas the filtering algorithms provide the ability to match the profile (interests and context) of the tourist against a large knowledge base of tourism resources. While, in terms of technology, personal mobile devices can gather user-related information, including the user context, and access multiple data sources, the creation and maintenance of an updated knowledge base of tourism-related resources requires a collaborative approach due to the heterogeneity, volume and dynamic nature of the resources. This PhD thesis aims to contribute to the solution of this problem by adopting a crowdsourcing approach for the collaborative maintenance of the knowledge base of resources, trust and reputation mechanisms for the validation of uploaded resources as well as of publishers, Big Data techniques for user profiling, and context-aware filtering algorithms for the personalised recommendation of tourism resources.
Abstract:
Nasal congestion is the symptom most frequently reported in inflammatory and/or infectious diseases of the nasal mucosa, with allergic rhinitis as its most frequent cause. Given the scarce epidemiological information on this problem, the present study assesses and characterises the prevalence of nasal obstruction in the adult population and the current situation regarding the aetiology and treatment of this symptom, so frequent in clinical practice.
Methodology: The study was carried out during the first quarter of 2007, based on a representative sample of the population of Portugal aged 15 years or over. A questionnaire was applied to identify seven symptoms occurring in the previous two weeks and three symptoms occurring in the previous week, and functional assessments were performed, with measurements of peak nasal inspiratory flow (nasal peak flow) in a sub-sample of the population. A "global nasal congestion index" was created from the seven questionnaire questions by transforming the response indicator into an index.
Results: Of the 1037 respondents, about 9.5% reported difficulty working, learning at school or carrying out their activities because of nasal symptoms. About two thirds of the population showed no nasal congestion (group A, 65.6%), 16.4% reported minor complaints (group B), 13.3% presented mild to moderate nasal congestion, and about 4.6% presented severe nasal congestion. About 17.9% of the studied population has significant complaints of nasal congestion. Nasal congestion indices were significantly higher in women and in individuals who reported difficulty working/studying/performing some activity due to nasal symptoms. In the analysis of the three symptoms occurring during the previous week, patients with higher congestion indices had significantly more complaints of "waking up in the morning with a blocked or obstructed nose" (p < 0.0001), "waking up in the morning with a dry mouth or feeling thirsty" (p < 0.0001) and of snoring (p
Abstract:
Dissertation for obtaining the Master's degree in Informatics Engineering
Abstract:
Dissertation for obtaining the degree of Doctor in Informatics
Abstract:
Dissertation for obtaining the Master's degree in Informatics Engineering
Abstract:
ABSTRACT - Advance Directives are written or oral instructions formulated by a competent person regarding the provision or withdrawal of medical care in the event of an illness that leaves them unable to decide or to express their will (Neves et al., 2010). In the present study, the research problem centres on how the construction of an Advance Directives model can contribute to better management in healthcare units. For this research project, a literature review was carried out on the main concepts and the current state of knowledge, which contributed to the definition of the empirical research objectives: to contribute to the creation of a body of knowledge regarding Advance Directives by drawing up a proposal for a model to gauge their acceptability in healthcare units in Portugal, and to identify the different stakeholders and their role in the implementation of the proposed model. Regarding the study of Advance Directives in Portugal, and in particular in healthcare units, an exploratory and descriptive methodology was used to identify the main characteristics of, and work developed in, the area under investigation. The first conclusion lies in the characteristics of society itself, which does not yet seem sufficiently aware of the problem under study, notwithstanding the legislative initiatives recently put forward. This observation also holds at the level of healthcare providers.