977 results for Automatic generation


Relevance:

20.00%

Publisher:

Abstract:

Research project submitted in partial fulfilment of the Master's Degree in Statistics and Information Management.

Relevance:

20.00%

Publisher:

Abstract:

Dissertation presented for obtaining the Master's Degree in Electrical Engineering and Computer Science at Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia.

Relevance:

20.00%

Publisher:

Abstract:

Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.

Relevance:

20.00%

Publisher:

Abstract:

Eradication of code smells is often pointed out as a way to improve readability, extensibility and design in existing software. However, code smell detection remains time-consuming and error-prone, partly due to the inherent subjectivity of the detection processes currently available. To mitigate this subjectivity problem, this dissertation presents a tool, developed as an Eclipse plugin, that automates a technique for the detection and assessment of code smells in Java source code. The technique is based on a binary logistic regression model that uses complexity metrics as independent variables and is calibrated by experts' knowledge. An overview of the technique is provided, and the tool is described and validated through an example case study.
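The regression-based detection idea can be sketched in a few lines. The following is a minimal illustration, not the dissertation's actual plugin: the metric choices, training data and smell label are placeholder assumptions.

```python
# Minimal sketch: binary logistic regression over complexity metrics,
# calibrated from expert-labelled examples. Data below are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical per-method metrics: [cyclomatic complexity, lines of code, parameters]
X = np.array([[2, 15, 1], [18, 220, 7], [4, 30, 2],
              [25, 310, 9], [3, 22, 1], [20, 180, 6]])
y = np.array([0, 1, 0, 1, 0, 1])  # expert labels: 1 = smelly (e.g., "Long Method")

model = LogisticRegression().fit(X, y)
# The predicted probability doubles as an assessment score for the smell
print(model.predict_proba([[12, 150, 5]])[0, 1])
```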

Relevance:

20.00%

Publisher:

Abstract:

Retinal ultra-wide field of view images (fundus images) provide visualization of a large part of the retina; however, artifacts may appear in those images. Eyelashes and eyelids often cover the clinical region of interest and, worse, eyelashes can be mistaken for arteries and/or veins when those images are put through automatic diagnosis or segmentation software, producing false-positive results. By correcting this problem, the first step in the development of reliable automatic disease diagnosis programs can be taken, and an objective disease assessment tool that eliminates human error from those processes can also be achieved. In this work, the development of a tool that automatically delimits the clinical region of interest is proposed, by retrieving features from the images that are then analyzed by an automatic classifier. This classifier evaluates the information and decides which part of the image is of interest and which part contains artifacts. The method was implemented as software in the C# language and validated through statistical analysis. The results confirmed that the presented methodology is capable of detecting artifacts and selecting the clinical region of interest in fundus images of the retina.
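The feature-then-classify structure described above can be illustrated with a small sketch. The original tool was written in C#; this Python version uses made-up patch features and synthetic stand-in data, and only shows the shape of the pipeline:

```python
# Sketch: per-patch features fed to an automatic classifier that separates
# the clinical region of interest (ROI) from artifacts. Features and the
# synthetic training data are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def patch_features(patch):
    # Simple intensity and gradient-energy features for one image patch
    gy, gx = np.gradient(patch.astype(float))
    return [patch.mean(), patch.std(), (gx**2 + gy**2).mean()]

rng = np.random.default_rng(1)
patches = [rng.random((32, 32)) for _ in range(40)]  # stand-in fundus patches
labels = rng.integers(0, 2, 40)                      # 1 = ROI, 0 = artifact

clf = RandomForestClassifier(random_state=0).fit(
    [patch_features(p) for p in patches], labels)
print(clf.predict([patch_features(rng.random((32, 32)))]))  # 1 keeps the patch
```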

Relevance:

20.00%

Publisher:

Abstract:

The extraction of relevant terms from texts is an extensively researched task in text mining. Relevant terms have been applied in areas such as information retrieval and document clustering and classification. However, relevance has a rather fuzzy nature, since the classification of some terms as relevant or not relevant is not consensual. For instance, while words such as "president" and "republic" are generally considered relevant by human evaluators, and words like "the" and "or" are not, terms such as "read" and "finish" gather no consensus about their semantics and informativeness. Concepts, on the other hand, have a less fuzzy nature. Therefore, instead of deciding on the relevance of a term during the extraction phase, as most extractors do, I propose to first extract from texts what I have called generic concepts (all concepts) and postpone the decision about relevance to downstream applications, according to their needs. For instance, a keyword extractor may assume that the most relevant keywords are the most frequent concepts in the documents. Moreover, most statistical extractors are incapable of extracting single-word and multi-word expressions using the same methodology. These factors led to the development of the ConceptExtractor, a statistical and language-independent methodology which is explained in Part I of this thesis. In Part II, I show that the automatic extraction of concepts has great applicability. For instance, for the extraction of keywords from documents, using the Tf-Idf metric only on concepts yields better results than using Tf-Idf without concepts, especially for multi-word expressions. In addition, since concepts can be semantically related to other concepts, they allow us to build implicit document descriptors. These applications led to published work. Finally, I present some work that, although not yet published, is briefly discussed in this document.
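The keyword-extraction application mentioned above reduces to scoring concepts with Tf-Idf. A hedged sketch follows; the per-document concept lists are invented placeholders where the actual ConceptExtractor output would go:

```python
# Tf-Idf over extracted concepts (single- and multi-word alike), ranking the
# concepts of one document by relevance. Concept lists are illustrative.
import math
from collections import Counter

docs = [
    ["president", "republic", "elected president"],  # concepts per document
    ["republic", "constitution"],
    ["football", "match", "elected president"],
]

def tfidf(concept, doc, docs):
    tf = Counter(doc)[concept] / len(doc)
    df = sum(1 for d in docs if concept in d)
    return tf * math.log(len(docs) / df)

doc = docs[0]
scores = {c: tfidf(c, doc, docs) for c in set(doc)}
print(sorted(scores, key=scores.get, reverse=True))  # most relevant first
```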

Relevance:

20.00%

Publisher:

Abstract:

A potentially renewable and sustainable source of energy is the chemical energy associated with the solvation of salts. Mixing two aqueous streams with different saline concentrations is spontaneous and releases energy. The theoretically obtainable global power from salinity gradient energy, due to the world's rivers discharging into the oceans, has been estimated to be in the range of 1.4-2.6 TW. Reverse electrodialysis (RED) is one of the emerging, membrane-based technologies for harvesting salinity gradient energy. A common RED stack is composed of alternately arranged cation- and anion-exchange membranes, stacked between two electrodes. The compartments between the membranes are alternately fed with concentrated (e.g., sea water) and dilute (e.g., river water) saline solutions. Migration of the respective counter-ions through the membranes leads to an ionic current between the electrodes, where an appropriate redox pair converts the chemical salinity gradient energy into electrical energy. Given the importance of new sources of energy for power generation, the present study aims at better understanding and solving current challenges associated with RED stack design, fluid dynamics, ionic mass transfer and long-term RED stack performance with natural saline solutions as feedwaters. Chronopotentiometry was used to determine the diffusion boundary layer (DBL) thickness from diffusion relaxation data, and the flow entrance effects on mass transfer were found to enable an increase in power generation in RED stacks. Increasing the linear flow velocity also decreases the DBL thickness, but at the cost of a higher pressure drop. The pressure drop inside RED stacks was successfully simulated by the mathematical model developed here, which includes the contribution of several pressure drops that had not previously been considered. The effect of each pressure drop on RED stack performance was identified and rationalized, and guidelines for the planning and/or optimization of RED stacks were derived. The design of new profiled membranes with a chevron corrugation structure was proposed using computational fluid dynamics (CFD) modeling. The performance of the suggested corrugation geometry was compared with existing ones, as well as with the use of conductive and non-conductive spacers. According to the estimations, the use of chevron structures yields the highest net power density values, at the best compromise between the mass transfer coefficient and the pressure drop. Finally, long-term experiments with natural waters were performed, during which fouling occurred. For the first time, 2D fluorescence spectroscopy was used to monitor RED stack performance, with a dedicated focus on following fouling on the ion-exchange membrane surfaces. To extract relevant information from the fluorescence spectra, parallel factor analysis (PARAFAC) was performed. The information obtained was then used to predict net power density, stack electric resistance and pressure drop with multivariate statistical models based on projection to latent structures (PLS) modeling. The use in such models of 2D fluorescence data, containing hidden information about fouling on membrane surfaces that PARAFAC can extract, considerably improved the models' fit to the experimental data.
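The velocity versus pressure-drop trade-off mentioned above can be made concrete with a back-of-the-envelope sketch. This is not the dissertation's validated model: the thin-slit (Hagen-Poiseuille) pressure-drop formula and all numbers are textbook assumptions chosen only to show how pumping losses grow with flow velocity.

```python
# Higher flow velocity thins the DBL (raising gross power) but increases the
# pressure drop and pumping loss. Channel dimensions are illustrative.
def pressure_drop(mu, length, velocity, height):
    """Hagen-Poiseuille pressure drop (Pa) in a thin slit channel."""
    return 12 * mu * length * velocity / height**2

mu = 1e-3          # water viscosity, Pa.s
L, h = 0.1, 2e-4   # channel length (m) and intermembrane distance (m)

for v in (0.005, 0.01, 0.02):      # linear flow velocity, m/s
    dp = pressure_drop(mu, L, v, h)
    pump = dp * v * h / L          # pumping power per membrane area, W/m^2
    print(f"v={v} m/s  dP={dp:.0f} Pa  pumping loss={pump:.4f} W/m^2")
```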

Relevance:

20.00%

Publisher:

Abstract:

In cataract surgery, the eye's natural lens is removed because it has become opaque and no longer allows clear vision. To maintain the eye's optical power, a new artificial lens must be inserted. Called an intraocular lens (IOL), it must be modelled so as to have the correct refractive power to replace the natural lens. Calculating the refractive power of this substitution lens requires precise anterior eye chamber measurements. An interferometry instrument, the AC Master from Zeiss Meditec AG, was in use for half a year to perform these measurements. A low coherence interferometry (LCI) measurement beam is aligned with the eye's optical axis for precise measurements of anterior eye chamber distances. The eye follows a fixation target in order to bring the visual axis into alignment with the optical axis. Performance problems occurred, however, at this step. There was therefore a need to develop a new procedure that ensures better alignment between the eye's visual and optical axes, allowing a more user-friendly and versatile procedure and eventually automating the whole process. With this instrument, alignment between the eye's optical and visual axes is detected when Purkinje reflections I and III overlap as the eye follows a fixation target. In this project, image analysis is used to detect the positions of these Purkinje reflections, eventually detecting automatically when they overlap. Automatic detection of the third Purkinje reflection of an eye following a fixation target is possible, with some restrictions. Each pair of detected third Purkinje reflections is used to automatically calculate an acceptable starting position for the fixation target, as required for precise measurements of anterior eye chamber distances.
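The image-analysis step can be sketched as bright-blob detection plus an overlap test. A minimal illustration, assuming the Purkinje reflections appear as the brightest blobs in the frame; the threshold and pixel tolerance are placeholder values, not the project's calibrated parameters:

```python
# Find bright reflections as connected bright blobs and flag when two of
# them (Purkinje I and III) coincide within a tolerance.
import numpy as np
from scipy import ndimage

def reflection_centres(image, threshold=0.9):
    """Intensity-weighted centroids of bright blobs (candidate reflections)."""
    mask = image > threshold * image.max()
    labels, n = ndimage.label(mask)
    return ndimage.center_of_mass(image, labels, range(1, n + 1))

def overlapped(c1, c3, tol=3.0):
    """True when the two reflection centres coincide within tol pixels."""
    return np.hypot(c1[0] - c3[0], c1[1] - c3[1]) < tol

img = np.zeros((64, 64))
img[20, 20] = img[20, 22] = 1.0       # two nearby synthetic reflections
centres = reflection_centres(img)
print(centres, overlapped(*centres))  # 2 px apart -> reported as overlapping
```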

Relevance:

20.00%

Publisher:

Abstract:

Driven by the growth of the Internet and the Semantic Web, together with improvements in communication speed and the rapid growth of storage capacity, the volume of data and information rises considerably every day. Because of this, in the last few years there has been growing interest in formal representation structures with suitable characteristics, such as the ability to organize data and information and to reuse their contents for the generation of new knowledge. Controlled vocabularies, and specifically ontologies, stand out as representation structures with high potential: they allow not only the representation of data but also its reuse for knowledge extraction, coupled with subsequent storage through not-so-complex formalisms. However, to ensure that the knowledge in an ontology is always up to date, ontologies need maintenance. Ontology learning is the area that studies the details of updating and maintaining ontologies. It is worth noting that the relevant literature already presents first results on the automatic maintenance of ontologies, but still at a very early stage. Human-driven processes are still the current way to update and maintain an ontology, which makes this a cumbersome task. The generation of new knowledge for ontology growth can be based on data mining techniques, an area that studies techniques for data processing, pattern discovery and knowledge extraction in IT systems. This work proposes a novel semi-automatic method for knowledge extraction from unstructured data sources, using data mining techniques, namely pattern discovery, focused on improving the precision of the concepts and semantic relations present in an ontology. In order to verify the applicability of the proposed method, a proof of concept was developed and its results are presented, applied to the building and construction sector.
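As a flavour of pattern-based knowledge extraction from unstructured text, here is a deliberately simple sketch using one lexico-syntactic pattern ("X such as Y") to propose candidate is-a relations for human review. The thesis's semi-automatic method and its mined patterns are broader; the sentence and heuristics here are invented:

```python
# Propose (concept, superclass) candidates from an "X such as Y" pattern;
# in a semi-automatic workflow a curator confirms them before ontology update.
import re

def candidate_isa_pairs(sentence):
    """Extract (hyponym, hypernym) candidates from one 'such as' pattern."""
    if " such as " not in sentence:
        return []
    left, right = sentence.split(" such as ", 1)
    hypernym = left.split()[-1]                # head noun of left phrase (crude)
    hyponyms = re.split(r",| and ", right)
    return [(h.strip(" ."), hypernym) for h in hyponyms if h.strip(" .")]

print(candidate_isa_pairs(
    "The facade uses materials such as concrete, steel and timber."))
# [('concrete', 'materials'), ('steel', 'materials'), ('timber', 'materials')]
```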

Relevance:

20.00%

Publisher:

Abstract:

Notch is a conserved signalling pathway which plays a crucial role in multiple cellular processes, such as stem cell self-renewal, cell division, proliferation and apoptosis. In mammals, four Notch receptors and five ligands have been described; their interaction is achieved through their extracellular domains, leading to transcriptional activation of different target genes. Increased expression of Notch ligands has been detected in several types of cancer, including breast cancer, suggesting that these proteins represent possible therapeutic targets. The goal of this work was to generate quality protein targets and, by phage display technology, to select function-blocking antibodies specific for Notch ligands. Phage display is a powerful technique that allows the generation of highly specific antibodies for therapeutic use, and it has also proved to be a reliable approach for identifying and validating new cancer-related targets. We also aimed at solving the three-dimensional structure of the Notch ligands, alone and in complex with selected antibodies. The initial phase of this work focused on optimizing the expression and purification of a human Delta-like 1 ligand mutant construct (hDLL1-DE3) by refolding from E. coli inclusion bodies. To confirm the biological activity of the produced recombinant protein, cellular functional studies were performed, revealing that treatment with the hDLL1-DE3 protein led to modulation of Notch target genes. In a second stage of this study, antibody fragments (Fabs) specific for hDLL1-DE3 were generated by phage display, using the produced protein as the target, and one good Fab candidate was selected to determine the best expression conditions. In parallel, multiple crystallization conditions were tested with hDLL1-DE3, but so far none has led to positive results.

Relevance:

20.00%

Publisher:

Abstract:

This thesis does not set out to focus on the dynamic relationship between Twitter and stock prices; instead, it tries to understand whether using relevant information extracted from tweets has the power to increase investors' stock-picking ability and generate alpha in portfolio choice relative to a benchmark. Despite the short period analyzed, the results are promising: sentiment analysis performed by Social Market Analytics Inc., applied to an equity portfolio, is able to generate positive abnormal returns that are statistically significant both in and out of sample.
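The evaluation idea, estimating a portfolio's alpha relative to a benchmark, can be sketched as a simple CAPM-style regression. The returns below are simulated placeholders, not the thesis's data or Social Market Analytics' signal:

```python
# Estimate alpha by regressing portfolio returns on benchmark returns (OLS).
import numpy as np

rng = np.random.default_rng(0)
bench = rng.normal(0.0004, 0.01, 250)                     # daily benchmark returns
port = 0.0008 + 0.9 * bench + rng.normal(0, 0.005, 250)   # sentiment-tilted portfolio

beta, alpha = np.polyfit(bench, port, 1)  # port = alpha + beta * bench
print(f"alpha = {alpha:.5f} per day, beta = {beta:.2f}")
```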

Relevance:

20.00%

Publisher:

Abstract:

Ship tracking systems allow maritime organizations concerned with safety at sea to obtain information on the current location and route of merchant vessels. Thanks to space technology, in recent years the geographical coverage of ship tracking platforms has increased significantly, from radar-based near-shore traffic monitoring towards a worldwide picture of the maritime traffic situation. The long-range tracking systems currently in operation allow ship position data to be stored over many years: a valuable source of knowledge about the shipping routes between different ocean regions. The outcome of this Master's project is a software prototype for estimating the most frequently operated shipping route between any two geographical locations. The analysis is based on historical ship positions acquired with long-range tracking systems. The proposed approach applies a genetic algorithm to a training set of relevant ship positions extracted from the long-term tracking database of the European Maritime Safety Agency (EMSA). The analysis of some representative shipping routes is presented, and the quality of the results and their operational applications are assessed by a maritime safety expert.
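A toy sketch of the genetic-algorithm idea: evolve a route, represented as a sequence of waypoints, so that it stays close to historical ship positions. The fitness function, operators and synthetic positions are illustrative assumptions, not EMSA's prototype:

```python
# Evolve waypoint routes toward historical ship fixes (elitism + mutation).
import random

random.seed(42)
positions = [(i, i + random.uniform(-1, 1)) for i in range(10)]  # historical fixes

def fitness(route):
    # Negative summed squared distance from each fix to its nearest waypoint
    return -sum(min((px - wx) ** 2 + (py - wy) ** 2 for wx, wy in route)
                for px, py in positions)

def mutate(route):
    new = list(route)
    i = random.randrange(len(new))
    x, y = new[i]
    new[i] = (x, y + random.uniform(-0.5, 0.5))  # jitter one waypoint
    return new

pop = [[(i, random.uniform(-2, 2)) for i in range(10)] for _ in range(30)]
for _ in range(200):                     # keep the elite, mutate to refill
    pop.sort(key=fitness, reverse=True)
    pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(20)]

print(round(-fitness(pop[0]), 3))        # residual deviation of the best route
```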

Relevance:

20.00%

Publisher:

Abstract:

Taking into account that the sun's radiation is estimated to be enough to cover 10,000 times the world's total energy needs (BRAKMANN & ARINGHOFF, 2003), it is difficult to understand why solar photovoltaic (PV) systems are still such a small part of the energy source matrix across the globe. Though there is an ongoing debate as to whether energy consumption leads to economic growth or the other way around, the two variables appear correlated, and it is clear that ensuring the availability of energy to match a country's growth targets is one of the prime concerns of any government. The topic of centralized versus distributed electricity generation is also approached, especially in what regards the latter's fit to developing countries' needs, namely their limited investment capabilities and infrastructure, scattered populations, and other factors. Finally, Brazil's case is reviewed, showing that the current cost of electricity from the grid versus the cost of PV solutions still places an investment of this nature at 9 to 16 years to reach breakeven (against a 25-year panel lifespan), which is too long compared with the 4 years required by most Brazilians. Still, recently passed legislation opened the door, even if unknowingly, to the development of co-owned solar farms, which could reduce implementation costs by as much as 20% and hence reduce the time to breakeven by 3 years.
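The breakeven arithmetic quoted above can be sketched with a simple payback calculation. The cost and savings figures are hypothetical placeholders chosen only to land inside the ranges cited in the abstract:

```python
# Simple payback: years for a PV system's savings to repay its cost, and the
# effect of a ~20% cost cut from co-owned solar farms. Inputs are assumptions.
def breakeven_years(system_cost, annual_savings):
    return system_cost / annual_savings

cost, savings = 12000.0, 1000.0                 # BRL, hypothetical
print(breakeven_years(cost, savings))           # 12.0 years (within 9-16)
print(breakeven_years(cost * 0.8, savings))     # 9.6 years: ~20% cheaper, ~3 fewer
```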

Relevance:

20.00%

Publisher:

Abstract:

Due to advances in information technology (e.g., digital video cameras, ubiquitous sensors), the automatic detection of human behaviors from video has become a very active and recent research topic. In this paper, we perform a systematic review of the recent literature on this topic, from 2000 to 2014, covering a selection of 193 papers retrieved from six major scientific publishers. The selected papers were classified into three main subjects: detection techniques, datasets and applications. The detection techniques were divided into four categories (initialization, tracking, pose estimation and recognition). The list of datasets includes eight examples (e.g., Hollywood action). Finally, several application areas were identified, including human detection, abnormal activity detection, action recognition, player modeling and pedestrian detection. Our analysis provides a road map to guide future research on designing automatic visual human behavior detection systems.