28 results for Automatic segmentation


Relevance:

20.00%

Abstract:

A Work Project, presented as part of the requirements for the Award of a Master's Degree in Finance from the NOVA – School of Business and Economics

Relevance:

20.00%

Abstract:

Dissertation submitted in fulfilment of the requirements for the Degree of Master in Biomedical Engineering

Relevance:

20.00%

Abstract:

Dissertation presented to obtain the Degree of Master in Biomedical Engineering

Relevance:

20.00%

Abstract:

Dissertation presented for obtaining the Master's Degree in Electrical Engineering and Computer Science at Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia

Relevance:

20.00%

Abstract:

Eradication of code smells is often pointed out as a way to improve readability, extensibility and design in existing software. However, code smell detection remains time-consuming and error-prone, partly due to the inherent subjectivity of the detection processes presently available. To mitigate this subjectivity problem, this dissertation presents a tool, developed as an Eclipse plugin, that automates a technique for the detection and assessment of code smells in Java source code. The technique is based on a Binary Logistic Regression model that uses complexity metrics as independent variables and is calibrated by experts' knowledge. An overview of the technique is provided, and the tool is described and validated on an example case study.
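The regression step can be sketched as follows; the metric names and coefficient values below are illustrative assumptions, not the thesis's actual calibrated model:

```python
import math

# Hypothetical calibrated coefficients: in the dissertation these would be
# fitted from expert-labelled examples; here they are made up for illustration.
COEFFS = {"intercept": -4.2, "loc": 0.015, "cyclomatic": 0.35, "num_params": 0.4}

def smell_probability(metrics):
    """Binary logistic regression: p = 1 / (1 + exp(-(b0 + sum_i b_i * x_i)))."""
    z = COEFFS["intercept"] + sum(COEFFS[name] * value
                                  for name, value in metrics.items())
    return 1.0 / (1.0 + math.exp(-z))

# A long, branchy method with many parameters scores high; a method is then
# flagged as a smell candidate when p exceeds a chosen cut-off such as 0.5.
p = smell_probability({"loc": 120, "cyclomatic": 14, "num_params": 5})
print(round(p, 3))
```

A plugin built this way only ranks candidates; the final assessment is still left to the developer, which is consistent with calibrating the model from expert knowledge.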

Relevance:

20.00%

Abstract:

The extraction of relevant terms from texts is an extensively researched task in Text Mining. Relevant terms have been applied in areas such as Information Retrieval or document clustering and classification. However, relevance has a rather fuzzy nature, since the classification of some terms as relevant or not relevant is not consensual. For instance, while words such as "president" and "republic" are generally considered relevant by human evaluators, and words like "the" and "or" are not, terms such as "read" and "finish" gather no consensus about their semantics and informativeness. Concepts, on the other hand, have a less fuzzy nature. Therefore, instead of deciding on the relevance of a term during the extraction phase, as most extractors do, I propose to first extract from texts what I call generic concepts (all concepts) and postpone the decision about relevance to downstream applications, according to their needs. For instance, a keyword extractor may assume that the most relevant keywords are the most frequent concepts in the documents. Moreover, most statistical extractors are incapable of extracting single-word and multi-word expressions with the same methodology. These factors led to the development of the ConceptExtractor, a statistical and language-independent methodology which is explained in Part I of this thesis. In Part II, I show that the automatic extraction of concepts has great applicability. For instance, for the extraction of keywords from documents, using the Tf-Idf metric only on concepts yields better results than using Tf-Idf without concepts, especially for multi-word expressions. In addition, since concepts can be semantically related to other concepts, they allow us to build implicit document descriptors. These applications led to published work. Finally, I present some work that, although not yet published, is briefly discussed in this document.
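A minimal sketch of the Tf-Idf-on-concepts idea, using a fabricated corpus and an assumed concept set (the ConceptExtractor itself, which would produce the concepts, is not reproduced here):

```python
import math
from collections import Counter

# Toy corpus: each document is a list of already-extracted terms. The concept
# set is assumed to come from a concept extractor; both are fabricated.
docs = [
    ["president", "republic", "elected president"],
    ["president", "speech"],
    ["republic", "constitution"],
]
concepts = {"president", "republic", "elected president"}

def tfidf_concepts(doc, corpus):
    """Tf-Idf scores computed only over terms that are known concepts."""
    tf = Counter(term for term in doc if term in concepts)
    scores = {}
    for term, freq in tf.items():
        df = sum(1 for d in corpus if term in d)            # document frequency
        scores[term] = (freq / len(doc)) * math.log(len(corpus) / df)
    return scores

scores = tfidf_concepts(docs[0], docs)
# the rarer multi-word concept outscores the single words
print(max(scores, key=scores.get))
```

Because multi-word expressions enter the scoring as single concept units rather than as separate tokens, they can compete with single words directly, which is the effect the thesis reports for multi-word keywords.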

Relevance:

20.00%

Abstract:

Proceedings of the 16th Annual Conference organized by the Insurance Law Association of Serbia and the German Foundation for International Legal Co-Operation (IRZ), entitled "Insurance law, governance and transparency: basics of the legal certainty", held in Palić, Serbia, 17–19 April 2015.

Relevance:

20.00%

Abstract:

Towards a holistic perspective of CRM, this project aims to diagnose and propose a strategy and market segmentation for Siemens Healthcare. The main underlying principle is to apply a fully customer-centric outlook that takes the business's own properties into consideration while preserving Siemens Healthcare's culture and vision. Focused mainly on market segmentation, this project goes beyond established boundaries by employing an unbiased perspective of CRM while challenging the current strategy, goals, processes, tools, initiatives and KPIs. In order to promote a sustainable business excellence strategy, this project aspires to streamline CRM's strategic importance and to drive the company one step forward.

Relevance:

20.00%

Abstract:

A sample of 445 consumers resident in distinct Lisbon areas was analyzed through direct observation in order to discover the current proportion of each lifestyle, applying the Whitaker Lifestyle™ Method. The findings of the hypothesis tests conducted on the population proportions reveal that the Neo-Traditional and Modern Whitaker lifestyles have the significantly highest proportions, while the overall presence of the different lifestyles varies across neighborhoods. The research further demonstrates the validity of the Whitaker observation techniques, the differences in media consumption among lifestyles, and the importance of style and aesthetics when segmenting consumers by lifestyle. Finally, market opportunities are outlined for firms operating in Lisbon.
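The kind of test involved can be sketched as a one-sample z-test on a population proportion; the observed count and the null proportion below are illustrative assumptions, not the study's data:

```python
import math

def proportion_z_test(successes, n, p0):
    """One-sample z-test of H0: p = p0 against H1: p > p0.
    Returns (z statistic, one-sided p-value)."""
    p_hat = successes / n
    se = math.sqrt(p0 * (1 - p0) / n)          # standard error under H0
    z = (p_hat - p0) / se
    # one-sided p-value via the standard normal survival function
    p_value = 0.5 * math.erfc(z / math.sqrt(2))
    return z, p_value

# Illustrative: did one lifestyle exceed a hypothesised 25% share in the
# n = 445 sample? The count 140 is fabricated for the example.
z, p = proportion_z_test(successes=140, n=445, p0=0.25)
print(round(z, 2), round(p, 4))
```

With a sufficiently large sample the normal approximation is adequate here; a small p-value would lead to rejecting the hypothesised proportion.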

Relevance:

20.00%

Abstract:

In cataract surgery, the eye's natural lens is removed because it has gone opaque and no longer allows clear vision. To maintain the eye's optical power, a new artificial lens must be inserted. Called an Intraocular Lens (IOL), it needs to be modelled so that it has the correct refractive power to substitute the natural lens. Calculating the refractive power of this substitution lens requires precise anterior eye chamber measurements. An interferometry instrument, the AC Master from Zeiss Meditec AG, was in use for half a year to perform these measurements. A Low Coherence Interferometry (LCI) measurement beam is aligned with the eye's optical axis for precise measurements of anterior eye chamber distances, and the eye follows a fixation target in order to bring the visual axis into alignment with the optical axis. Performance problems occurred, however, at this step. There was therefore a need to develop a new procedure that ensures better alignment between the eye's visual and optical axes, allows a more user-friendly and versatile workflow, and eventually automates the whole process. With this instrument, the alignment between the eye's optical and visual axes is detected when Purkinje reflections I and III overlap as the eye follows a fixation target. In this project, image analysis is used to detect the positions of these Purkinje reflections and, eventually, to automatically detect when they overlap. Automatic detection of the third Purkinje reflection of an eye following a fixation target is possible with some restrictions. Each pair of detected third Purkinje reflections is then used to automatically calculate an acceptable starting position for the fixation target, as required for precise measurements of anterior eye chamber distances.
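The overlap-detection idea can be sketched with simple thresholding and blob centroiding on a synthetic grayscale frame; the actual Purkinje I/III image processing in the thesis is more involved, and all values below are assumptions:

```python
import math

def bright_centroids(image, threshold):
    """Centroids of connected bright regions (4-connectivity flood fill)."""
    h, w = len(image), len(image[0])
    seen = [[False] * w for _ in range(h)]
    blobs = []
    for y in range(h):
        for x in range(w):
            if image[y][x] >= threshold and not seen[y][x]:
                stack, pixels = [(y, x)], []
                seen[y][x] = True
                while stack:
                    cy, cx = stack.pop()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if (0 <= ny < h and 0 <= nx < w and not seen[ny][nx]
                                and image[ny][nx] >= threshold):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                blobs.append((sum(p[0] for p in pixels) / len(pixels),
                              sum(p[1] for p in pixels) / len(pixels)))
    return blobs

def reflections_overlap(image, threshold=200, tol=2.0):
    """True when the two reflection centroids are within `tol` pixels,
    or when the two reflections have merged into a single blob."""
    blobs = bright_centroids(image, threshold)
    if len(blobs) == 1:
        return True          # merged into one bright spot
    if len(blobs) < 2:
        return False         # nothing detected in this frame
    (y1, x1), (y2, x2) = blobs[:2]
    return math.hypot(y1 - y2, x1 - x2) <= tol
```

In a real frame the corneal (Purkinje I) and lens (Purkinje III) reflections differ in brightness and size, so they would first need to be told apart before an overlap test like this is meaningful.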

Relevance:

20.00%

Abstract:

The world is swiftly adapting to visual communication. Online services like YouTube and Vine show that video is no longer the domain of broadcast television alone. Video is used for different purposes such as entertainment, information, education or communication. The rapid growth of today's video archives, with only sparse editorial data available, creates a big retrieval problem. Humans see a video as a complex interplay of cognitive concepts, so there is a need to build a bridge between numeric values and semantic concepts; establishing this connection will facilitate the retrieval of videos by humans. The critical aspect of this bridge is video annotation. The process can be done manually or automatically. Manual annotation is very tedious, subjective and expensive, so automatic annotation is being actively studied. In this thesis we focus on the automatic annotation of multimedia content, namely the use of information retrieval analysis techniques to automatically extract metadata from video in a videomail system, and the identification of text, people, actions, spaces and objects, including animals and plants. It will then be possible to align the multimedia content with the text presented in the email message and to create applications for semantic video database indexing and retrieval.

Relevance:

20.00%

Abstract:

This dissertation consists of three essays on the labour market impact of firing and training costs. The modelling framework draws on the search and matching literature. The first chapter introduces firing costs, both linear and non-linear, in a New Keynesian model, analysing business cycle effects for different degrees of wage rigidity. The second chapter adds training costs in a model of a segmented labour market, assessing the interaction between these two features and the skill composition of the labour force. Finally, the third chapter analyses empirically some of the issues raised in the second chapter.

Relevance:

20.00%

Abstract:

Ship tracking systems allow maritime organizations concerned with safety at sea to obtain information on the current location and route of merchant vessels. Thanks to space technology, in recent years the geographical coverage of ship tracking platforms has increased significantly, from radar-based near-shore traffic monitoring towards a worldwide picture of the maritime traffic situation. The long-range tracking systems currently in operation allow the storage of ship position data over many years: a valuable source of knowledge about the shipping routes between different ocean regions. The outcome of this Master's project is a software prototype for the estimation of the most operated shipping route between any two geographical locations. The analysis is based on the historical ship positions acquired with long-range tracking systems. The proposed approach applies a Genetic Algorithm to a training set of relevant ship positions extracted from the long-term tracking database of the European Maritime Safety Agency (EMSA). The analysis of some representative shipping routes is presented, and the quality of the results and their operational applications are assessed by a maritime safety expert.
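The genetic-algorithm idea can be sketched as evolving a short sequence of waypoints between two ports so that the route passes close to historical ship positions. The encoding, operators and EMSA data used in the actual prototype differ; everything below is synthetic:

```python
import random

random.seed(42)

START, END = (0.0, 0.0), (10.0, 10.0)              # fixed route endpoints
HISTORY = [(2.0, 1.0), (5.0, 4.0), (8.0, 9.0)]     # fabricated track points
N_WAYPOINTS, POP, GENS = 3, 30, 60

def fitness(route):
    """Lower is better: squared distance from each historical position
    to its nearest point on the candidate route (endpoints included)."""
    full = [START] + route + [END]
    return sum(min((hx - x) ** 2 + (hy - y) ** 2 for x, y in full)
               for hx, hy in HISTORY)

def random_route():
    return [(random.uniform(0, 10), random.uniform(0, 10))
            for _ in range(N_WAYPOINTS)]

def mutate(route):
    # small Gaussian jitter on every waypoint
    return [(x + random.gauss(0, 0.5), y + random.gauss(0, 0.5))
            for x, y in route]

pop = [random_route() for _ in range(POP)]
initial_best = min(map(fitness, pop))
for _ in range(GENS):
    pop.sort(key=fitness)
    parents = pop[:POP // 3]                        # elitist truncation selection
    children = []
    while len(children) < POP - len(parents):
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, N_WAYPOINTS)      # one-point crossover
        children.append(mutate(a[:cut] + b[cut:]))
    pop = parents + children

best = min(pop, key=fitness)
print(round(fitness(best), 2))
```

Because the parents survive unchanged each generation, the best fitness can only improve; a production version would of course work on geodesic distances and far larger position sets.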