89 results for Database, Image Retrieval, Browsing, Semantic Concept


Relevance:

20.00%

Publisher:

Abstract:

This paper analyzes the implications of workers' overestimation of their productivity for firms in which incentives take the form of tournaments. Each worker overestimates his own productivity but is aware of the bias in his opponent's self-assessment. The manager of the firm, on the other hand, correctly assesses workers' productivities and self-beliefs when setting tournament prizes. The paper shows that, under a variety of circumstances, firms make higher profits when workers have a positive self-image than when they do not. By contrast, workers' welfare declines because of their own misguided choices.

Relevance:

20.00%

Publisher:

Abstract:

Dissertation presented to obtain the degree of Master in Chemical and Biochemical Engineering

Relevance:

20.00%

Publisher:

Abstract:

Internship report presented in fulfilment of the requirements for the degree of Master in Communication Sciences, with specialization in Strategic Communication

Relevance:

20.00%

Publisher:

Abstract:

Dissertation presented to obtain the degree of Doctor in Environmental Engineering

Relevance:

20.00%

Publisher:

Abstract:

This thesis describes a semi-automated, image-processing-based cell analysis system. An image processing algorithm was studied in order to segment cells in a semi-automatic way, with the main goal of speeding up the cell image segmentation process without significantly affecting the quality of the results. Although a fully manual system can produce the best results, it has the disadvantage of being slow and repetitive when a large number of images needs to be processed. An active contour algorithm, more commonly known as snakes, was tested on a sequence of images taken with a microscope. The user defines an initial region enclosing the cell, and the algorithm then runs iteratively, making the contour of that region converge to the cell boundary. From the final contour it is possible to extract region properties and produce statistical data. These data show that the algorithm produces results similar to those of a purely manual system, but at a faster rate. It is slower than a fully automatic approach, but it allows the user to adjust the contour, making it more versatile and more tolerant to image variations.
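A minimal sketch of the semi-automatic workflow just described, using the snake implementation in scikit-image rather than the thesis's own code; the file name, seed circle and smoothing/energy parameters are illustrative assumptions.

```python
# Semi-automatic cell segmentation with an active contour (snake):
# the user supplies a rough initial circle, the snake converges to the
# cell boundary, and region properties are extracted for statistics.
import numpy as np
from skimage import io, filters, measure, draw
from skimage.segmentation import active_contour

image = io.imread("cell_frame.png", as_gray=True)    # one microscope frame (assumed file)
smoothed = filters.gaussian(image, sigma=2)           # reduce noise before evolving the snake

# User-defined initial region: a circle roughly enclosing the cell (assumed seed).
theta = np.linspace(0, 2 * np.pi, 200)
row0, col0, radius = 120, 150, 40
init = np.column_stack([row0 + radius * np.sin(theta),
                        col0 + radius * np.cos(theta)])

# Let the contour converge to the cell boundary (parameters are illustrative).
snake = active_contour(smoothed, init, alpha=0.015, beta=10.0, gamma=0.001)

# Rasterize the final contour and compute region statistics.
mask = np.zeros(image.shape, dtype=np.uint8)
rr, cc = draw.polygon(snake[:, 0], snake[:, 1], shape=image.shape)
mask[rr, cc] = 1
props = measure.regionprops(mask)[0]
print("area:", props.area, "perimeter:", props.perimeter, "eccentricity:", props.eccentricity)
```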

Relevance:

20.00%

Publisher:

Abstract:

Breast cancer is the most common cancer among women and a major public health problem. Worldwide, X-ray mammography is the current gold standard for breast cancer imaging, but it has well-known limitations: false-negative rates of up to 66% in symptomatic women and false-positive rates of up to 60% are a continued source of concern and debate. These drawbacks prompt the development of other imaging techniques for breast cancer detection, among them Digital Breast Tomosynthesis (DBT). DBT is a 3D radiographic technique that reduces the obscuring effect of tissue overlap and appears to address both the false-negative and the false-positive problem. The 3D images in DBT are obtained only through image reconstruction methods, which play an important role in a clinical setting because the reconstruction process needs to be both accurate and fast. This dissertation deals with the optimization of iterative reconstruction algorithms, using parallel computing on Graphics Processing Units (GPUs) with the Compute Unified Device Architecture (CUDA) to make the 3D reconstruction faster. Iterative algorithms have been shown to produce the highest-quality DBT images and have the potential to reduce patient dose in DBT scans, but they are computationally intensive, which currently prevents their clinical use. A method of integrating CUDA into Interactive Data Language (IDL) is proposed in order to accelerate the DBT image reconstructions; this method had never been attempted before for DBT. In this work the system matrix calculation, the most computationally expensive part of the iterative algorithms, is accelerated, and a speedup of 1.6 is achieved, demonstrating that GPUs can accelerate the IDL implementation.
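The dissertation itself accelerates the system matrix calculation with CUDA inside IDL; as a rough, language-neutral illustration of the kind of iterative update whose cost motivates that work, the sketch below shows one SIRT-style algebraic step against an assumed sparse system matrix (sizes, names and data are made up, and the GPU part is not reproduced).

```python
# One SIRT-type update for algebraic reconstruction:
#   x <- x + lambda * C * A^T * R * (p - A x)
# where A is the system matrix (projection geometry), p the measured
# projections, R and C row/column normalizations. Computing A is the
# expensive step that the thesis offloads to the GPU.
import numpy as np
import scipy.sparse as sp

def sirt_step(A, volume, projections, relaxation=1.0):
    row_sums = np.asarray(A.sum(axis=1)).ravel()   # per-ray weights
    col_sums = np.asarray(A.sum(axis=0)).ravel()   # per-voxel weights
    R = 1.0 / np.maximum(row_sums, 1e-12)
    C = 1.0 / np.maximum(col_sums, 1e-12)
    residual = projections - A @ volume            # mismatch in projection space
    return volume + relaxation * C * (A.T @ (R * residual))

# Toy sizes: 1000 detector measurements, 500 voxels (real DBT is far larger).
rng = np.random.default_rng(0)
A = sp.random(1000, 500, density=0.01, random_state=0, format="csr")
truth = rng.random(500)
p = A @ truth
x = np.zeros(500)
for _ in range(20):
    x = sirt_step(A, x, p)
print("residual norm:", np.linalg.norm(p - A @ x))
```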

Relevance:

20.00%

Publisher:

Abstract:

Despite the significant advances made possible in recent years in the fields of pharmacology and diagnostic testing, acute myocardial infarction and sudden cardiac death remain the first manifestation of coronary atherosclerosis for a significant proportion of patients, many of whom were previously asymptomatic. Traditionally, the diagnostic exams employed for the evaluation of possible coronary artery disease are based on the documentation of myocardial ischemia and are therefore tied to the presence of obstructive coronary stenosis. Non-obstructive coronary lesions, however, are also frequently involved in the development of coronary events. Although the absolute per-plaque risk of instability is higher for more obstructive, higher-burden plaques, these are much less frequent than non-obstructive lesions and therefore, in probabilistic terms, coronary events often result from the rupture or erosion of the latter. Recent advanced intracoronary imaging studies have provided evidence that, although it is possible to identify some features of vulnerability in plaques associated with the subsequent development of coronary events, the sensitivity and specificity of these features are too low for clinical application. More important than the risk associated with a particular plaque may be the global risk of the whole coronary tree, reflecting the sum of the probabilities of all its lesions: the higher the coronary atherosclerotic burden, the higher the patient's risk. Cardiac CT (coronary CT angiography) is still a young modality. It is the most recent non-invasive imaging technique for the study of coronary artery disease, and its development was made possible by important advances in multidetector CT technology.
These advances allowed significant improvements in temporal and spatial resolution, leading to better image quality as well as marked reductions in radiation dose. In parallel, growing experience with the technique generated a growing body of scientific evidence, making cardiac CT a robust imaging tool for the evaluation of coronary artery disease and broadening its clinical indications. More recently, several publications have documented its prognostic value, marking the transition of cardiac CT to adulthood. Besides excluding the presence of coronary artery disease and identifying obstructive lesions, cardiac CT also identifies non-obstructive lesions, a unique capability among non-invasive imaging modalities. By evaluating both obstructive and non-obstructive lesions, cardiac CT can provide a quantification of the total coronary atherosclerotic burden, which can be useful for stratifying the risk of future coronary events. In the present work it was possible to identify significant demographic and clinical predictors of a high coronary atherosclerotic burden as assessed by cardiac CT, although their discriminative power was modest, even when the individual variables were combined into clinical scores. Among the different clinical scores, performance was somewhat better for the Heartscore cardiovascular risk score. This modest performance underlines the limitations of predicting the presence and extent of coronary disease from clinical variables alone, even when combined into risk scores. One of the classical risk factors, obesity, in fact showed a paradoxical relation with coronary atherosclerotic burden, which may explain some of the limitations of the clinical models. Diabetes mellitus, on the other hand, was one of the strongest clinical predictors and was taken as a model of more advanced coronary disease, useful for evaluating the performance of different plaque-burden scores. Given the high prevalence of plaques identifiable in the coronary tree of patients undergoing cardiac CT, it is of utmost importance to develop tools that quantify the total coronary atherosclerotic burden and thereby identify patients who could eventually benefit from more intensive preventive measures. This was the rationale for developing a coronary atherosclerotic burden score that combines the information on location, degree of stenosis and plaque composition provided by cardiac CT: the CT-LeSc. This score may become a useful tool to quantify the total coronary atherosclerotic burden and is expected to convey the strong prognostic information of cardiac CT. Lastly, the concept of the vulnerable coronary tree may become more important than that of the vulnerable plaque, and its assessment by cardiac CT may become important in a more advanced primary prevention strategy. This could lead to more personalized primary prevention, tailoring the intensity of preventive measures to the atherosclerotic burden, and may become one of the most important indications of cardiac CT in the near future.

Relevance:

20.00%

Publisher:

Abstract:

The increasing use of information and communication technologies (ICT) in diverse professional and personal contexts calls for new knowledge and for a set of abilities, competences and attitudes required for active and participative citizenship. In this context it is acknowledged that universities have an important role in innovating in the educational use of digital media to promote an inclusive digital literacy. The educational potential of digital technologies and resources has been recognized by both researchers and practitioners. Multiple pedagogical models and research approaches have already helped to show the importance of adapting instructional and learning practices and processes to concrete contexts and educational goals. Still, the academic and scientific communities believe further investment in ICT research in higher education is needed. This study focuses on educational models that may support uses of digital technology in which it has cognitive and educational advantages over analogue technologies. A teaching and learning model, centred on the active role of students in the exploration, production, presentation and discussion of interactive multimedia materials, was developed and applied using the internet and emerging semantic hypermedia formats. The research approach focused on the definition of design principles for class activities, which were applied in three iterations in undergraduate courses at two institutions, the University of Texas at Austin, USA, and the University of Lisbon, Portugal. The analysis carried out in this study made it possible to evaluate the potential and efficacy of the proposed model and of the chosen authoring tool in supporting metacognitive skills and attitudes related to information structuring and management, storytelling and communication, using computers and the internet.

Relevance:

20.00%

Publisher:

Abstract:

The extraction of relevant terms from texts is an extensively researched task in Text Mining. Relevant terms have been applied in areas such as Information Retrieval and document clustering and classification. However, relevance has a rather fuzzy nature, since the classification of some terms as relevant or not relevant is not consensual. For instance, while words such as "president" and "republic" are generally considered relevant by human evaluators, and words like "the" and "or" are not, terms such as "read" and "finish" gather no consensus about their semantics and informativeness. Concepts, on the other hand, have a less fuzzy nature. Therefore, instead of deciding on the relevance of a term during the extraction phase, as most extractors do, I propose to first extract from texts what I call generic concepts (all concepts) and to postpone the decision about relevance to downstream applications, according to their needs. For instance, a keyword extractor may assume that the most relevant keywords are the most frequent concepts in the documents. Moreover, most statistical extractors are incapable of extracting single-word and multi-word expressions with the same methodology. These factors led to the development of the ConceptExtractor, a statistical and language-independent methodology explained in Part I of this thesis. In Part II, I show that the automatic extraction of concepts has wide applicability. For instance, for keyword extraction from documents, applying the Tf-Idf metric only to concepts yields better results than using Tf-Idf without concepts, especially for multi-word expressions. In addition, since concepts can be semantically related to other concepts, they allow us to build implicit document descriptors. These applications led to published work. Finally, I present some work that, although not yet published, is briefly discussed in this document.
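A toy illustration of the downstream use mentioned above: once a concept extractor has produced single- and multi-word concepts per document, keywords can be chosen as the concepts with the highest Tf-Idf. The ConceptExtractor itself is not shown, and the concept lists below are invented.

```python
# Rank assumed concept lists by Tf-Idf and keep the top ones as keywords.
import math
from collections import Counter

docs = [
    ["president", "republic", "economic policy", "president"],
    ["image retrieval", "semantic concept", "database", "image retrieval"],
    ["republic", "semantic concept", "information retrieval"],
]

def tfidf_keywords(documents, top_k=2):
    n_docs = len(documents)
    df = Counter(c for doc in documents for c in set(doc))          # document frequency
    keywords = []
    for doc in documents:
        tf = Counter(doc)
        scores = {c: (tf[c] / len(doc)) * math.log(n_docs / df[c]) for c in tf}
        keywords.append(sorted(scores, key=scores.get, reverse=True)[:top_k])
    return keywords

print(tfidf_keywords(docs))
```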

Relevance:

20.00%

Publisher:

Abstract:

Ontologies formalized by means of Description Logics (DLs) and rules in the form of Logic Programs (LPs) are two prominent formalisms in the field of Knowledge Representation and Reasoning. While DLs adhere to the Open World Assumption and are suited for taxonomic reasoning, LPs implement reasoning under the Closed World Assumption, so that default knowledge can be expressed. However, for many applications it is useful to have a means of reasoning over an open domain and expressing rules with exceptions at the same time. Hybrid MKNF knowledge bases make such a means available by formalizing DLs and LPs in a common logic, the Logic of Minimal Knowledge and Negation as Failure (MKNF). Since rules and ontologies are used in open environments such as the Semantic Web, inconsistencies cannot always be avoided. This poses a problem due to the Principle of Explosion, which holds in classical logics. Paraconsistent logics offer a solution to this issue by assigning meaningful models even to contradictory sets of formulas. Consequently, paraconsistent semantics for DLs and LPs have been investigated intensively. Our goal is to apply the paraconsistent approach to the combination of DLs and LPs in hybrid MKNF knowledge bases. In this thesis, a new six-valued semantics for hybrid MKNF knowledge bases is introduced, extending the three-valued approach by Knorr et al., which is based on the well-founded semantics for logic programs. Additionally, a procedural way of computing paraconsistent well-founded models for hybrid MKNF knowledge bases by means of an alternating fixpoint construction is presented, and the algorithm is proven to be sound and complete with respect to the model-theoretic characterization of the semantics. Moreover, it is shown that the new semantics is faithful with respect to well-studied paraconsistent semantics for DLs and LPs, respectively, and maintains the efficiency of the approach it extends.
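The six-valued MKNF procedure is not reproduced here, but the alternating fixpoint construction it builds on can be sketched for the simpler, well-known case of the well-founded semantics of a normal logic program; the rule encoding and toy program below are assumptions for illustration.

```python
# Alternating fixpoint for the well-founded semantics of a normal logic
# program. Rules are (head, positive_body, negative_body) triples.

def least_model(positive_rules):
    """Least model of a negation-free program by naive forward chaining."""
    true = set()
    changed = True
    while changed:
        changed = False
        for head, pos in positive_rules:
            if head not in true and all(a in true for a in pos):
                true.add(head)
                changed = True
    return true

def gamma(rules, interpretation):
    """Reduct of the program w.r.t. `interpretation`, then its least model."""
    reduct = [(h, pos) for h, pos, neg in rules if not (set(neg) & interpretation)]
    return least_model(reduct)

def well_founded(rules):
    atoms = {h for h, _, _ in rules} | {a for _, pos, neg in rules for a in pos + neg}
    lo = set()                          # atoms currently known to be true
    while True:
        hi = gamma(rules, lo)           # atoms that are possibly true (not false)
        new_lo = gamma(rules, hi)
        if new_lo == lo:
            break
        lo = new_lo
    return lo, atoms - hi, hi - lo      # (true, false, undefined)

# Toy program:  p :- not q.   q :- not p.   r.   s :- r.
rules = [("p", [], ["q"]), ("q", [], ["p"]), ("r", [], []), ("s", ["r"], [])]
print(well_founded(rules))              # ({'r', 's'}, set(), {'p', 'q'})
```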

Relevance:

20.00%

Publisher:

Abstract:

Sign language is the form of communication used by Deaf people and, in most cases, has been learned since childhood. The problem arises when a non-Deaf person tries to communicate with a Deaf person, for example when non-Deaf parents try to communicate with their Deaf child. In most cases this situation occurs because the parents have not had time to properly learn sign language. This dissertation proposes teaching sign language through serious games. Similar solutions already exist, but they are scarce and limited. For this reason, the proposed solution is built around a natural user interface and is intended to introduce a new concept in this field. The validation of this work consisted of the implementation of a serious game prototype that can be used as a resource for learning (Portuguese) sign language. In a first stage, a module responsible for recognizing sign language was implemented; this increased the level of interaction and led to an algorithm capable of accurately recognizing sign language. In a second stage, the proposal was studied so that its pros and cons could be determined and taken into account in future work.
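The abstract does not detail the recognition algorithm, so the following is only a hypothetical sketch of the general idea behind such a module: a natural user interface (for example a depth camera) yields a fixed-length feature vector per hand pose, and the sign is predicted by nearest-neighbour matching against labelled training poses. All feature values and labels are invented.

```python
# Toy 1-nearest-neighbour classification of hand-pose feature vectors.
import numpy as np

train_features = np.array([
    [0.1, 0.9, 0.8, 0.2],   # e.g. normalized fingertip distances (invented)
    [0.9, 0.1, 0.2, 0.8],
    [0.5, 0.5, 0.5, 0.5],
])
train_labels = ["A", "B", "C"]

def recognize(sample, features=train_features, labels=train_labels):
    """Return the label of the closest training pose."""
    distances = np.linalg.norm(features - sample, axis=1)
    return labels[int(np.argmin(distances))]

print(recognize(np.array([0.15, 0.85, 0.75, 0.25])))   # -> "A"
```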

Relevance:

20.00%

Publisher:

Abstract:

Since the invention of photography, humans have used images to capture, store and analyse what they are interested in. With the developments in this field, assisted by better computers, it is possible to use image processing as an accurate method of analysis and measurement. The principal qualities of image processing are flexibility, adaptability and the ability to process a large amount of information easily and quickly. Successful applications can be found in several areas of human life, such as biomedicine, industry, surveillance, the military and mapping; indeed, several Nobel prizes are related to imaging. The accurate measurement of deformations, displacements, strain fields and surface defects is challenging in many material tests in Civil Engineering, because these measurements traditionally require complex and expensive equipment as well as time-consuming calibration. Image processing can be an inexpensive and effective tool for load-displacement measurements: with an adequate image acquisition system, and taking advantage of the computational power of modern computers, it is possible to measure very small displacements with high precision. Several commercial software packages are already on the market, but at a high cost. In this work, block-matching algorithms are used to compare the results of image processing with the data obtained from physical transducers during laboratory load tests. To test the proposed solutions, several load tests were carried out in partnership with researchers from the Civil Engineering Department at Universidade Nova de Lisboa (UNL).
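A minimal sketch of the block-matching idea referred to above: a block around a target point in the unloaded (reference) image is searched for in the image taken under load, and the shift of the best match gives the displacement in pixels. File names, block size and search window are assumptions, and the thesis's own algorithms may differ.

```python
# Block matching by normalized cross-correlation for displacement measurement.
import numpy as np
from skimage import io
from skimage.feature import match_template

ref = io.imread("specimen_unloaded.png", as_gray=True)   # reference image (assumed file)
cur = io.imread("specimen_loaded.png", as_gray=True)     # image under load (assumed file)

row, col, half = 200, 300, 16                     # target point and half block size
block = ref[row - half:row + half, col - half:col + half]

# Search within a window of +/- 20 px around the original position.
win = 20
search = cur[row - half - win:row + half + win, col - half - win:col + half + win]

score = match_template(search, block)             # correlation map over the search window
dr, dc = np.unravel_index(np.argmax(score), score.shape)
displacement = (dr - win, dc - win)               # shift relative to the original position
print("displacement (rows, cols):", displacement)
```

Sub-pixel accuracy, as needed for strain fields, would additionally require interpolating the correlation peak, which is omitted here.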

Relevance:

20.00%

Publisher:

Abstract:

Instituto Politécnico de Lisboa (IPL) and Instituto Superior de Engenharia de Lisboa (ISEL); support granted through grant SPRH/PROTEC/67580/2010, which partially funded this work.

Relevance:

20.00%

Publisher:

Abstract:

Nowadays, the consumption of goods and services on the Internet is constantly increasing. Small and Medium Enterprises (SMEs), mostly from traditional industry sectors, usually do business in weak and fragile market sectors where customized products and services prevail. To survive and compete in today's markets they have to readjust their business strategies by creating new manufacturing processes and establishing new business networks through new technological approaches. In order to compete with large enterprises, these partnerships aim at sharing resources, knowledge and strategies to boost the sector's business consolidation through the creation of dynamic manufacturing networks. To support this, the development of a centralized information system is proposed, allowing enterprises to select and create dynamic manufacturing networks capable of monitoring the whole manufacturing process, including the assembly, packaging and distribution phases. Even networking partners from the same area have multiple, heterogeneous representations of the same knowledge, each reflecting its own view of the domain. Thus, different conceptual, semantic and, consequently, lexical knowledge representations may occur in the network, causing non-transparent sharing of information and interoperability inconsistencies. A framework, supported by a tool, that flexibly enables the identification, classification and resolution of such semantic heterogeneities is therefore required. This tool will support the network in establishing semantic mappings, facilitating the integration of the various enterprises' information systems.
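A toy sketch of the kind of semantic-mapping support described above: candidate correspondences between two partners' vocabularies are proposed by simple string similarity, to be confirmed or rejected by a user. The term lists and threshold are invented, and a real framework would also exploit structural and semantic evidence rather than lexical similarity alone.

```python
# Propose candidate term mappings between two partners' vocabularies.
from difflib import SequenceMatcher

partner_a = ["client order", "assembly line", "packaging unit"]
partner_b = ["customer order", "assembly-line", "shipping box"]

def candidate_mappings(terms_a, terms_b, threshold=0.6):
    """Return (term_a, term_b, score) pairs whose lexical similarity exceeds threshold."""
    pairs = []
    for a in terms_a:
        for b in terms_b:
            score = SequenceMatcher(None, a.lower(), b.lower()).ratio()
            if score >= threshold:
                pairs.append((a, b, round(score, 2)))
    return sorted(pairs, key=lambda p: p[2], reverse=True)

for a, b, s in candidate_mappings(partner_a, partner_b):
    print(f"{a!r} <-> {b!r} (similarity {s})")
```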

Relevance:

20.00%

Publisher:

Abstract:

Field Lab in Entrepreneurial Innovative Ventures