994 results for "Text summarisation tool"


Relevance: 20.00%

Abstract:

OBJECTIVE: To comparatively detect A. actinomycetemcomitans and F. nucleatum in periodontally diseased and healthy sites. METHODS: Subgingival clinical samples from 50 adult periodontitis patients and 50 healthy subjects were analyzed. Both organisms were isolated using a trypticase soy agar-bacitracin-vancomycin (TSBV) medium and detected by PCR. Conventional biochemical tests were used for bacterial identification. RESULTS: A. actinomycetemcomitans and F. nucleatum were isolated in 18% and 20% of the patients, respectively, and in 2% and 24% of the healthy subjects. Among the A. actinomycetemcomitans isolates, biotype II was the most prevalent. Primer pair AA was 100% sensitive in the detection of A. actinomycetemcomitans in both subject groups, and primers ASH and FU were also 100% sensitive in detecting this organism in the healthy subject samples. Primer pair FN5047 was more sensitive than primer 5059S in detecting F. nucleatum, in both patient and healthy subject samples. Primers ASH and 5059S were more specific in the detection of A. actinomycetemcomitans and F. nucleatum, respectively, in both patient and healthy subject samples. CONCLUSIONS: PCR is an effective tool for detecting periodontal pathogens in subgingival samples, providing a faster and safer means of diagnosing periodontal diseases. The method's sensitivity and specificity are conditioned by the choice of primer set.

Relevance: 20.00%

Abstract:

This paper proposes a swarm intelligence long-term hedging tool to support electricity producers in competitive electricity markets. The tool investigates the long-term hedging opportunities available to electric power producers through the use of contracts with physical (spot and forward) and financial (options) settlement. To find the optimal portfolio, the producer's risk preference is stated by a utility function (U) expressing the trade-off between the expectation and the variance of the return. Both the expected return and the variance estimate are based on a forecasted scenario interval determined by a long-term price range forecast model, developed by the authors, whose explanation is outside the scope of this paper. The proposed tool makes use of Particle Swarm Optimization (PSO), and its performance has been evaluated by comparing it with a Genetic Algorithm (GA) based approach. To validate the risk management tool, a case study using real historical price data from the mainland Spanish market is presented, demonstrating the effectiveness of the proposed methodology.
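
A concrete illustration of this kind of utility-driven portfolio search is sketched below in Python: toy forecasted price scenarios for three contract types, the utility U = E[R] - lambda * Var[R], and a bare-bones PSO over the contract allocation. All scenario data, parameter values and names are illustrative assumptions, not the authors' actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy forecasted price scenarios (EUR/MWh) for three instruments:
# spot, forward and option payoff. Illustrative numbers only.
scenarios = rng.normal(loc=[50.0, 48.0, 5.0],
                       scale=[12.0, 2.0, 4.0], size=(500, 3))

LAMBDA = 0.1  # risk-aversion weight in the utility function

def utility(weights):
    """U = E[return] - lambda * Var[return] for one allocation."""
    returns = scenarios @ weights
    return returns.mean() - LAMBDA * returns.var()

def pso(n_particles=30, n_iters=200, dim=3):
    """Bare-bones PSO maximising the utility over allocations in [0, 1]^dim."""
    x = rng.uniform(0, 1, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                        # particle velocities
    pbest, pbest_val = x.copy(), np.array([utility(p) for p in x])
    gbest = pbest[pbest_val.argmax()].copy()
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, 0, 1)
        vals = np.array([utility(p) for p in x])
        better = vals > pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[pbest_val.argmax()].copy()
    return gbest, pbest_val.max()

weights, u = pso()
print("allocation:", weights.round(3), "utility:", round(u, 2))
```

The inertia and acceleration coefficients (0.7, 1.5, 1.5) are common textbook defaults; the GA-based comparison from the paper would replace the velocity update with selection, crossover and mutation.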

Relevance: 20.00%

Abstract:

Neonatal anthropometry is an inexpensive, noninvasive and convenient tool for bedside evaluation, especially in sick and fragile neonates. Anthropometry can be used in neonates as a tool for several purposes: diagnosis of foetal malnutrition and prediction of early postnatal complications; postnatal assessment of growth, body composition and nutritional status; prediction of long-term complications, including metabolic syndrome; assessment of dysmorphology; and estimation of body surface. However, in this age group anthropometry has been notorious for its inaccuracy, and the main concern is to make validated indices available. Direct measurements, such as body weight, length and body circumferences, are the most commonly used measurements for nutritional assessment in clinical practice and in field studies. Body weight is the most reliable anthropometric measurement and is therefore often used alone in the assessment of nutritional status, despite not reflecting body composition. Indices derived from direct measurements have been proposed to improve the accuracy of anthropometry. Equations based on body weight and length, the mid-arm circumference/head circumference ratio, and upper-arm cross-sectional areas are among the most widely used derived indices for assessing nutritional status and body proportionality, even though these indices require further validation for the estimation of body composition in neonates.
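
To make the derived indices concrete, the sketch below computes two such measures: Rohrer's ponderal index, a standard weight-for-length index (100 * weight in g / length in cm, cubed), and the mid-arm circumference/head circumference ratio named above. The example values are illustrative only, not clinical reference data.

```python
def ponderal_index(weight_g: float, length_cm: float) -> float:
    """Rohrer's ponderal index: 100 * weight (g) / length (cm)^3."""
    return 100.0 * weight_g / length_cm ** 3

def mac_hc_ratio(mid_arm_circ_cm: float, head_circ_cm: float) -> float:
    """Mid-arm circumference / head circumference ratio."""
    return mid_arm_circ_cm / head_circ_cm

# Example: a term neonate (illustrative values only)
print(round(ponderal_index(3200, 50), 2))   # 2.56
print(round(mac_hc_ratio(10.5, 34.5), 2))   # 0.3
```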

Relevance: 20.00%

Abstract:

This paper addresses the optimal involvement of a power producer in derivatives electricity markets to hedge against pool price volatility. To achieve this aim, a swarm intelligence meta-heuristic optimization technique is proposed as a long-term risk management tool. This tool investigates the long-term risk-hedging opportunities available to electric power producers through the use of contracts with physical (spot and forward contracts) and financial (options contracts) settlement. The producer's risk preference is formulated as a utility function (U) expressing the trade-off between the expectation and the variance of the return. Both the expectation and the variance of the return are based on a forecasted scenario interval determined by a long-term price range forecasting model. This model also makes use of particle swarm optimization (PSO) to find the parameters that achieve the best forecasting results. Since the price estimation depends on load forecasting, this work also presents a regressive long-term load forecast model that, like the price model, uses PSO to find its best parameters. The performance of the PSO technique has been evaluated by comparison with a Genetic Algorithm (GA) based approach. A case study using real historical price and load data from the mainland Spanish electricity market is presented, and the results are discussed, demonstrating the effectiveness of the methodology in handling this type of problem. Finally, conclusions are duly drawn.
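
The regressive load forecast component can be sketched in the same spirit: a simple trend-plus-seasonality regression whose coefficients are found by PSO minimising the squared error on historical load. The model form, the data and all parameter values below are illustrative assumptions, not the authors' actual forecasting model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative monthly load history (GWh): trend + annual seasonality + noise
t = np.arange(120)
load = 2000 + 3.0 * t + 150 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 40, t.size)

# Regressive model: load(t) = a + b*t + c*sin(2*pi*t/12) + d*cos(2*pi*t/12)
def predict(theta, t):
    a, b, c, d = theta
    return a + b * t + c * np.sin(2 * np.pi * t / 12) + d * np.cos(2 * np.pi * t / 12)

def sse(theta):
    """Sum of squared forecast errors on the historical load."""
    return np.sum((load - predict(theta, t)) ** 2)

# Minimal PSO minimising the squared error over the four coefficients
x = rng.uniform([0, -10, -300, -300], [3000, 10, 300, 300], (40, 4))
v = np.zeros_like(x)
pbest, pbest_val = x.copy(), np.array([sse(p) for p in x])
gbest = pbest[pbest_val.argmin()].copy()
for _ in range(300):
    r1, r2 = rng.random((2, 40, 4))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = x + v
    vals = np.array([sse(p) for p in x])
    better = vals < pbest_val
    pbest[better], pbest_val[better] = x[better], vals[better]
    gbest = pbest[pbest_val.argmin()].copy()

print("fitted coefficients:", gbest.round(2))
```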

Relevance: 20.00%

Abstract:

This paper introduces the PCMAT platform project and, in particular, one of its components, the PCMAT Metadata Authoring Tool. This is an educational web application that allows the project's metadata creators to write the metadata associated with each learning object without any concern for the semantics of the metadata schema. Furthermore, it permits the project managers to add elements to, or delete elements from, the schema without having to rewrite or compile any code.
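
A minimal sketch of the design idea (authoring and validation are derived from an editable schema, so schema changes need no code changes) is shown below. The schema fields and the validate helper are hypothetical illustrations, not the actual PCMAT schema or API.

```python
# Hypothetical, simplified schema: managers edit this data structure,
# and the authoring tool derives its form and validation from it.
schema = [
    {"name": "title",       "type": str, "required": True},
    {"name": "description", "type": str, "required": False},
    {"name": "difficulty",  "type": int, "required": True},
]

def validate(record: dict, schema: list) -> list:
    """Return a list of problems; an empty list means the record is valid."""
    problems = []
    for field in schema:
        value = record.get(field["name"])
        if value is None:
            if field["required"]:
                problems.append(f"missing required field: {field['name']}")
        elif not isinstance(value, field["type"]):
            problems.append(f"wrong type for field: {field['name']}")
    return problems

# Adding a schema element requires no code changes, only a new entry:
schema.append({"name": "language", "type": str, "required": True})
print(validate({"title": "Fractions 101", "difficulty": 2}, schema))
# -> ['missing required field: language']
```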

Relevance: 20.00%

Abstract:

This paper presents the SmartClean tool, whose purpose is to detect and correct data quality problems (DQPs). Compared with existing tools, SmartClean has one main advantage: the user does not need to specify the execution sequence of the data cleaning operations. To that end, an execution sequence was developed; the problems are manipulated (i.e., detected and corrected) following that sequence, which also supports the incremental execution of the operations. In this paper, the underlying architecture of the tool is presented and its components are described in detail. The validity of the tool, and consequently of the architecture, is demonstrated through a case study. Although SmartClean has cleaning capabilities at other levels as well, only those related to the attribute value level are described in this paper.
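
The fixed execution sequence at the attribute value level can be illustrated with a toy sketch: a predefined ordered list of detection/correction operations is applied to each value, so the user never specifies the order. The operations and sample values are illustrative, not SmartClean's actual sequence.

```python
# Illustrative attribute-value-level cleaning operations, applied in a
# fixed order so the user never has to specify the execution sequence.
def strip_whitespace(v):
    return v.strip() if isinstance(v, str) else v

def fix_missing(v):
    return None if v in ("", "N/A", "?") else v

def normalise_case(v):
    return v.title() if isinstance(v, str) else v

CLEANING_SEQUENCE = [strip_whitespace, fix_missing, normalise_case]

def clean_value(value):
    for operation in CLEANING_SEQUENCE:   # predefined order
        value = operation(value)
    return value

print([clean_value(v) for v in ["  ana silva ", "N/A", "PORTO"]])
# -> ['Ana Silva', None, 'Porto']
```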

Relevance: 20.00%

Abstract:

OBJECTIVE: To assess infestation levels of Aedes aegypti using the oviposition trap (ovitrap) method and to compare these results with data obtained using the indices traditionally applied in public programs aimed at fighting this vector. METHODS: Nine sentinel areas in Northeastern Brazil were assessed, and infestation levels were measured over a nine-month period. Egg density and container indices were estimated and compared with previous results found using the house index and Breteau index. RESULTS: The results indicated that the area studied was infested with this vector during the entire study period and that the infestation was widespread in all areas. Different results were found with the different indices studied. There were areas in which the house index and the Breteau index were negative or close to zero, whereas the container index for the same area was 11% and the egg density index was 8.3%. CONCLUSIONS: The container and egg density indices allow better assessment of infestation rates in a city than the conventionally used indices (house index and Breteau index). With lower operational costs and easier standardization, these indices can be applied as a measurement tool for assessing infestation rates during entomological surveillance in programs to fight Aedes aegypti.
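
For reference, the traditional indices compared in the study have simple standard definitions, sketched below for a hypothetical survey. The counts are invented, but chosen to echo the contrast reported above: house and Breteau indices near zero while the container index is high.

```python
def house_index(positive_houses, inspected_houses):
    """Percentage of inspected houses with Aedes larvae/pupae."""
    return 100.0 * positive_houses / inspected_houses

def container_index(positive_containers, inspected_containers):
    """Percentage of inspected water-holding containers with larvae/pupae."""
    return 100.0 * positive_containers / inspected_containers

def breteau_index(positive_containers, inspected_houses):
    """Positive containers per 100 inspected houses."""
    return 100.0 * positive_containers / inspected_houses

# Hypothetical survey of one sentinel area
print(house_index(3, 400))       # 0.75 -- close to zero
print(breteau_index(4, 400))     # 1.0  -- close to zero
print(container_index(55, 500))  # 11.0 -- infestation still evident
```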

Relevance: 20.00%

Abstract:

Today, business group decision making is an extremely important activity. A considerable number of applications and a considerable amount of research have been produced in recent years in order to increase the effectiveness of the decision-making process. To support the idea generation process, the IGTAI (Idea Generation Tool for Ambient Intelligence) prototype was created. IGTAI is a Group Decision Support System designed to support any kind of meeting, namely distributed, asynchronous or face-to-face. It aims at helping geographically distributed (or not) people and organizations in the idea generation task by making use of pervasive hardware in a meeting room, expanding the meeting beyond the room's walls by allowing ubiquitous access through different kinds of equipment. This paper focuses on the research carried out to build the IGTAI prototype, its architecture and its main functionalities, namely the support given in the different phases of an idea generation meeting.

Relevance: 20.00%

Abstract:

Background: A common task in analyzing microarray data is to determine which genes are differentially expressed across two (or more) kinds of tissue samples, or samples obtained under different experimental conditions. Several statistical methods have been proposed to accomplish this goal, generally based on measures of distance between classes. It is well known that biological samples are heterogeneous because of factors such as molecular subtypes or genetic background that are often unknown to the experimenter. For instance, in experiments involving the molecular classification of tumors it is important to identify significant subtypes of cancer. Bimodal or multimodal distributions often reflect the presence of mixtures of subsamples. Consequently, there can be genes that are differentially expressed only in sample subgroups, and these are missed if the usual statistical approaches are used. In this paper we propose a new graphical tool which identifies not only genes with up- and down-regulation, but also genes with differential expression in different subclasses, which are usually missed by current statistical methods. This tool is based on two measures of distance between samples, namely the overlapping coefficient (OVL) between two densities and the area under the receiver operating characteristic (ROC) curve. The methodology proposed here was implemented in the open-source R software. Results: The method was applied to a publicly available dataset, as well as to a simulated dataset. We compared our results with those obtained using some of the standard methods for detecting differentially expressed genes, namely the Welch t-statistic, fold change (FC), rank products (RP), average difference (AD), weighted average difference (WAD), moderated t-statistic (modT), intensity-based moderated t-statistic (ibmT), significance analysis of microarrays (samT) and area under the ROC curve (AUC). On both datasets, the differentially expressed genes with bimodal or multimodal distributions were missed by all of the standard selection procedures. We also compared our results with (i) the area between the ROC curve and the rising diagonal (ABCR) and (ii) the test for not proper ROC curves (TNRC). We found our methodology more comprehensive, because it detects both bimodal and multimodal distributions, and different variances can be considered in the two samples. Another advantage of our method is that the behavior of the different kinds of differentially expressed genes can be analyzed graphically. Conclusion: Our results indicate that the arrow plot represents a new, flexible and useful tool for the analysis of gene expression profiles from microarrays.
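
A minimal sketch of the two distance measures underlying the arrow plot, for a single simulated gene: the overlapping coefficient OVL (the integral of the pointwise minimum of the two class densities) and the AUC obtained from the Mann-Whitney U statistic. The simulated data are illustrative; this is not the authors' R implementation.

```python
import numpy as np
from scipy.stats import gaussian_kde, mannwhitneyu

rng = np.random.default_rng(2)
# Simulated log-expression of one gene in two classes; class 2 is a
# bimodal mixture, the situation the arrow plot is designed to catch.
x1 = rng.normal(0.0, 1.0, 60)
x2 = np.concatenate([rng.normal(-2.0, 0.7, 30), rng.normal(2.5, 0.7, 30)])

# OVL: integral of the pointwise minimum of the two estimated densities
grid = np.linspace(min(x1.min(), x2.min()) - 1,
                   max(x1.max(), x2.max()) + 1, 2000)
f1, f2 = gaussian_kde(x1)(grid), gaussian_kde(x2)(grid)
ovl = np.trapz(np.minimum(f1, f2), grid)

# AUC via the Mann-Whitney U statistic: AUC = U / (n1 * n2)
u_stat, _ = mannwhitneyu(x2, x1, alternative="two-sided")
auc = u_stat / (len(x1) * len(x2))

print(f"OVL = {ovl:.3f}, AUC = {auc:.3f}")
```

For a bimodal gene like this one, the AUC can sit near 0.5 even though the densities barely overlap, which is exactly the pattern the arrow plot is designed to flag.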

Relevance: 20.00%

Abstract:

In the present study we focus on the interaction between the acquisition of new words and text organisation. In the acquisition of new words we emphasise paradigmatic relations such as hyponymy, meronymy and semantic sets. We worked with a group of girls attending a private school for adolescents in serious difficulties; the subjects come from disadvantaged families, and their writing skills were very poor. When asked to describe a garden, they wrote a short text of a single paragraph: the lexical items were generic, there were no adjectives, and all of them used mainly existential verbs. The intervention plan assumed that subjects must be exposed to new words while working out their meaning. In the presence of the referents, subjects were taught new words, making explicit the intended relation of each new term to a term already known. In the classroom, subjects were asked to write all the words they knew, drawing the relationships among them. They talked about the words, specifying each relation with explicit pragmatic directions such as "is a kind of", "is a part of" or "are all x". After that, subjects were given the task of choosing a perspective. The work presented in this paper accounts for significant differences in the subjects' texts before and after the intervention: while working on the new words, subjects were organising their lexicon and learning to present a whole entity in perspective.

Relevance: 20.00%

Abstract:

Master's degree in Electrical Engineering – Electrical Power Systems

Relevance: 20.00%

Abstract:

Deep ocean species: the little that is known mostly comes from collected specimens. In their Letter "Specimen collection: An essential tool" (23 May, 344: 814), L.A. Rocha et al. brilliantly discuss the importance of specimen collection and present the evolution of collecting from the mid-19th century to our present strict codes of conduct. However, it is also important to emphasize that the vast majority of deep ocean macro-organisms are only known to us because of collection, and this is a strong argument that should be present in our actions as scientists. If the deep is considered the least known of Earth's habitats (only 1% or so known, according to recent estimates), then what awesome collection of yet-to-be-discovered species is still there to be properly described? As the authors point out, citing (1), something around 86% of species remain unknown. Voucher specimens are fundamental for the reasons pointed out, and perhaps the vast depths of the world's oceans are the best example of that importance. The summary report of the 2010 Census of Marine Life (2) showed that among the millions of specimens collected in both familiar and seldom-explored waters, the Census found more than 6,000 potentially new species and completed formal descriptions of more than 1,200 of them. It also found that a number of supposedly rare species are in fact common. Voucher specimens are essential and, again agreeing with the Letter by L.A. Rocha et al. (see above), the modern approach to collecting will not be a cause of extinctions but instead a valuable tool for knowledge and description, and even, as seen above, a way to find out that supposedly rare species may not be that rare and may even prove to have abundant populations.

Relevance: 20.00%

Abstract:

Project work presented to the Instituto de Contabilidade e Administração do Porto to obtain the Master's degree in Specialised Translation and Interpreting, under the supervision of Alberto Couto, MA.

Relevance: 20.00%

Abstract:

In studies assessing the effect of a given exposure variable on a specific outcome of interest, confusion may arise from the mistaken impression that the exposure variable is producing the outcome of interest when, in fact, the observed effect is due to an existing confounder. However, quantitative techniques are rarely used to determine the potential influence of unmeasured confounders. Sensitivity analysis is a statistical technique that allows one to quantitatively measure the impact of an unmeasured confounding variable on the association of interest being assessed. The purpose of this study was to make two sensitivity analysis methods available in the literature, developed by Rosenbaum and Greenland, easy to apply using an electronic spreadsheet. This should make it easier for researchers to include this quantitative tool in the set of procedures commonly used in the result validation stage.
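
As a flavour of what such spreadsheet methods compute, the sketch below applies the classic external-adjustment formula for a binary unmeasured confounder, in the spirit of the Greenland-type approach (the exact formulas implemented in the paper's spreadsheet may differ): the observed relative risk is divided by a bias factor determined by the assumed confounder-outcome risk ratio and the confounder prevalences among the exposed and unexposed. All numeric inputs are hypothetical.

```python
def adjusted_rr(rr_observed, gamma, p1, p0):
    """External adjustment for a binary unmeasured confounder.

    gamma  : assumed confounder-outcome risk ratio
    p1, p0 : assumed confounder prevalence among exposed / unexposed
    """
    bias = (p1 * (gamma - 1) + 1) / (p0 * (gamma - 1) + 1)
    return rr_observed / bias

# Hypothetical observed association, probed over a grid of assumptions
rr_obs = 2.0
for gamma in (2.0, 4.0):
    for p1, p0 in ((0.4, 0.2), (0.6, 0.1)):
        print(f"gamma={gamma}, p1={p1}, p0={p0}: "
              f"adjusted RR = {adjusted_rr(rr_obs, gamma, p1, p0):.2f}")
```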

Relevance: 20.00%

Abstract:

Liver steatosis is a common disease usually associated with social and genetic factors. Early detection and quantification are important, since it can evolve to cirrhosis. In this paper, a new computer-aided diagnosis (CAD) system for steatosis classification, on both a local and a global basis, is presented. A Bayes factor is computed from objective ultrasound textural features extracted from the liver parenchyma. The goal is to develop a CAD screening tool to help in steatosis detection. Results showed an accuracy of 93.33%, with a sensitivity of 94.59% and a specificity of 92.11%, using the Bayes classifier. The proposed CAD system also provides a suitable graphical display for steatosis classification.
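
The reported figures follow from standard confusion-matrix arithmetic. The sketch below uses one hypothetical set of counts (37 steatotic and 38 normal cases) that reproduces the abstract's percentages exactly; the actual class sizes in the study may differ.

```python
def metrics(tp, fn, tn, fp):
    """Accuracy, sensitivity and specificity from a confusion matrix."""
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    sensitivity = tp / (tp + fn)   # true positive rate (steatosis detected)
    specificity = tn / (tn + fp)   # true negative rate (normal recognised)
    return accuracy, sensitivity, specificity

# One hypothetical confusion matrix consistent with the reported figures:
# 37 steatotic cases (35 detected) and 38 normal cases (35 recognised)
acc, sens, spec = metrics(tp=35, fn=2, tn=35, fp=3)
print(f"accuracy={acc:.2%}, sensitivity={sens:.2%}, specificity={spec:.2%}")
# -> accuracy=93.33%, sensitivity=94.59%, specificity=92.11%
```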