996 results for Similarity Evaluation


Relevance:

60.00%

Publisher:

Abstract:

The web services (WS) technology provides a comprehensive solution for representing, discovering, and invoking services in a wide variety of environments, including Service Oriented Architectures (SOA) and grid computing systems. At the core of WS technology lie a number of XML-based standards, such as the Simple Object Access Protocol (SOAP), that have successfully ensured WS extensibility, transparency, and interoperability. Nonetheless, there is an increasing demand to enhance WS performance, which is severely impaired by XML's verbosity. SOAP communications produce considerable network traffic, making them unfit for distributed, loosely coupled, and heterogeneous computing environments such as the open Internet. Also, they introduce higher latency and processing delays than other technologies, like Java RMI and CORBA. WS research has recently focused on SOAP performance enhancement. Many approaches build on the observation that SOAP message exchange usually involves highly similar messages (those created by the same implementation usually have the same structure, and those sent from a server to multiple clients tend to show similarities in structure and content). Similarity evaluation and differential encoding have thus emerged as SOAP performance enhancement techniques. The main idea is to identify the common parts of SOAP messages, to be processed only once, avoiding a large amount of overhead. Other approaches investigate nontraditional processor architectures, including micro- and macro-level parallel processing solutions, so as to further increase the processing rates of SOAP/XML software toolkits. This survey paper provides a concise, yet comprehensive review of the research efforts aimed at SOAP performance enhancement. A unified view of the problem is provided, covering almost every phase of SOAP processing, ranging over message parsing, serialization, deserialization, compression, multicasting, security evaluation, and data/instruction-level processing.
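Differential encoding of the kind this survey reviews can be sketched in a few lines: when sender and receiver share a reference message, only the parts that differ need to travel. The sketch below uses Python's difflib and hypothetical SOAP message strings; it illustrates the idea, not any particular toolkit's wire format.

```python
import difflib

def encode_delta(ref: str, msg: str) -> list:
    """Encode msg as a delta against a shared reference message:
    ('copy', i, j) reuses ref[i:j]; ('insert', text) carries new text."""
    ops = []
    for tag, i1, i2, j1, j2 in difflib.SequenceMatcher(a=ref, b=msg).get_opcodes():
        if tag == "equal":
            ops.append(("copy", i1, i2))        # common part: indices only
        else:
            ops.append(("insert", msg[j1:j2]))  # differing part: literal text
    return ops

def decode_delta(ref: str, ops: list) -> str:
    """Rebuild the full message from the reference and the delta."""
    return "".join(ref[op[1]:op[2]] if op[0] == "copy" else op[1] for op in ops)

# Two hypothetical SOAP requests: identical structure, differing payloads.
ref_msg = "<soap:Envelope><soap:Body><GetQuote><Symbol>ACME</Symbol></GetQuote></soap:Body></soap:Envelope>"
new_msg = "<soap:Envelope><soap:Body><GetQuote><Symbol>INIT</Symbol></GetQuote></soap:Body></soap:Envelope>"
delta = encode_delta(ref_msg, new_msg)
```

Only the `('insert', …)` payloads are transmitted as literal text; everything shared with the reference is reduced to a pair of indices, which is where the savings come from when messages are highly similar.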

Relevance:

60.00%

Publisher:

Abstract:

XML similarity evaluation has become a central issue in the database and information communities, its applications ranging over document clustering, version control, data integration and ranked retrieval. Various algorithms for comparing hierarchically structured data, XML documents in particular, have been proposed in the literature. Most of them make use of techniques for finding the edit distance between tree structures, XML documents being commonly modeled as Ordered Labeled Trees. Yet, a thorough investigation of current approaches led us to identify several similarity aspects, i.e., sub-tree related structural and semantic similarities, which are not sufficiently addressed while comparing XML documents. In this paper, we provide an integrated and fine-grained comparison framework to deal with both structural and semantic similarities in XML documents (detecting the occurrences and repetitions of structurally and semantically similar sub-trees), and to allow the end-user to adjust the comparison process according to her requirements. Our framework consists of four main modules for (i) discovering the structural commonalities between sub-trees, (ii) identifying sub-tree semantic resemblances, (iii) computing tree-based edit operations costs, and (iv) computing tree edit distance. Experimental results demonstrate higher comparison accuracy with respect to alternative methods, while timing experiments reflect the impact of semantic similarity on overall system performance.
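Module (iv) above rests on the tree edit distance over ordered labeled trees. The following is a minimal sketch of a restricted top-down variant, in which deleting or inserting a child removes or adds its entire subtree; the paper's framework replaces the unit costs with structural and semantic costs, and full algorithms such as Zhang–Shasha allow more general mappings.

```python
def tree_size(t):
    """Number of nodes in a tree t = (label, [children])."""
    return 1 + sum(tree_size(c) for c in t[1])

def tree_dist(t1, t2):
    """Restricted (top-down) edit distance between ordered labeled trees."""
    relabel = 0 if t1[0] == t2[0] else 1
    c1, c2 = t1[1], t2[1]
    m, n = len(c1), len(c2)
    # Edit-distance DP over the child sequences: deleting or inserting a
    # child costs its whole subtree size; matching a pair recurses.
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        dp[i][0] = dp[i - 1][0] + tree_size(c1[i - 1])
    for j in range(1, n + 1):
        dp[0][j] = dp[0][j - 1] + tree_size(c2[j - 1])
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            dp[i][j] = min(
                dp[i - 1][j] + tree_size(c1[i - 1]),                # delete subtree
                dp[i][j - 1] + tree_size(c2[j - 1]),                # insert subtree
                dp[i - 1][j - 1] + tree_dist(c1[i - 1], c2[j - 1]), # match/relabel
            )
    return relabel + dp[m][n]
```

With trees such as `("book", [("title", []), ("author", [])])`, relabeling one leaf yields a distance of 1, while identical trees yield 0.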

Relevance:

40.00%

Publisher:

Abstract:

One way to achieve the large sample sizes required for genetic studies of complex traits is to combine samples collected by different groups. It is not often clear, however, whether this practice is reasonable from a genetic perspective. To assess the comparability of samples from the Australian and the Netherlands twin studies, we estimated FST (the proportion of total genetic variability attributable to genetic differences between cohorts) based on 359 short tandem repeat polymorphisms in 1068 individuals. FST was estimated to be 0.30% between the Australian and the Netherlands cohorts, a smaller value than between many European groups. We conclude that it is reasonable to combine the Australian and the Netherlands samples for joint genetic analyses.
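The FST quantity used above has a simple single-locus form: the shortfall of within-cohort heterozygosity relative to the pooled total. The sketch below is a Nei-style (GST) estimator with equal-weight pooling of allele counts, for illustration only; the study averaged over 359 markers and would use a sample-size-corrected estimator.

```python
def expected_het(freqs):
    """Expected heterozygosity 1 - sum(p_i^2) from allele frequencies."""
    return 1.0 - sum(p * p for p in freqs)

def fst(cohorts):
    """cohorts: one {allele: count} dict per cohort. Returns (H_T - H_S) / H_T."""
    per_pop, pooled = [], {}
    for counts in cohorts:
        total = sum(counts.values())
        per_pop.append(expected_het([c / total for c in counts.values()]))
        for allele, c in counts.items():
            pooled[allele] = pooled.get(allele, 0) + c
    h_s = sum(per_pop) / len(per_pop)            # mean within-cohort heterozygosity
    grand = sum(pooled.values())
    h_t = expected_het([c / grand for c in pooled.values()])  # pooled heterozygosity
    return (h_t - h_s) / h_t
```

Identical cohorts give FST = 0; cohorts fixed for different alleles give FST = 1; values near zero, as reported above, indicate cohorts that are safe to combine.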

Relevance:

30.00%

Publisher:

Abstract:

More than 90% of birds are socially monogamous, although genetic studies indicate that many are often not sexually monogamous. In the present study, DNA fingerprinting was used to estimate the genetic relationships between nestlings belonging to the same broods to evaluate the mating system in the socially monogamous macaw, Ara ararauna. We found that in 10 of 11 broods investigated, the nestlings showed genetic similarity levels congruent with values expected among full-sibs, suggesting that they shared the same parents. However, in one brood, the low genetic similarity observed between nestlings could be a result of intraspecific brood parasitism, intraspecific nest competition or extra-pair paternity. These results, along with available behavioral and life-history data, imply that the blue-and-yellow macaw is not only socially, but also genetically monogamous. However, the occurrence of eventual cases of extra-pair paternity cannot be excluded.

Relevance:

30.00%

Publisher:

Abstract:

Objective To evaluate drug interaction software programs and determine their accuracy in identifying drug-drug interactions that may occur in intensive care units. Setting The study was developed in Brazil. Method Drug interaction software programs were identified through a bibliographic search in PUBMED and in LILACS (database related to the health sciences published in Latin American and Caribbean countries). The programs' sensitivity, specificity, and positive and negative predictive values were determined to assess their accuracy in detecting drug-drug interactions. The accuracy of the software programs identified was determined using 100 clinically important interactions and 100 clinically unimportant ones. Stockley's Drug Interactions 8th edition was employed as the gold standard in the identification of drug-drug interactions. Main outcome Sensitivity, specificity, positive and negative predictive values. Results The programs studied were: Drug Interaction Checker (DIC), Drug-Reax (DR), and Lexi-Interact (LI). DR displayed the highest sensitivity (0.88) and DIC showed the lowest (0.69). A close similarity was observed among the programs regarding specificity (0.88-0.92) and positive predictive values (0.88-0.89). The DIC had the lowest negative predictive value (0.75) and DR the highest (0.91). Conclusion The DR and LI programs displayed appropriate sensitivity and specificity for identifying drug-drug interactions of interest in intensive care units. Drug interaction software programs help pharmacists and health care teams in the prevention and recognition of drug-drug interactions and optimize safety and quality of care delivered in intensive care units.
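The four accuracy measures reported above follow directly from a 2×2 confusion table over the 200 test interactions. A minimal sketch with hypothetical counts (not the study's raw data):

```python
def accuracy_metrics(tp, fn, tn, fp):
    """Standard diagnostic accuracy measures from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),  # important interactions detected
        "specificity": tn / (tn + fp),  # unimportant interactions not flagged
        "ppv": tp / (tp + fp),          # flagged interactions that matter
        "npv": tn / (tn + fn),          # unflagged interactions that don't
    }

# Hypothetical counts: 88 of 100 important interactions detected,
# 90 of 100 unimportant interactions correctly left unflagged.
m = accuracy_metrics(tp=88, fn=12, tn=90, fp=10)
```

With 100 important and 100 unimportant test interactions, sensitivity and specificity are simply the per-group hit rates, which is why the study's balanced design makes the four values easy to compare across programs.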

Relevance:

30.00%

Publisher:

Abstract:

Two experiments examined the effects of interpersonal and group-based similarity on perceived self-other differences in persuasibility (i.e. on third-person effects, Davison, 1983). Results of Experiment 1 (N=121), based on experimentally-created groups, indicated that third-person perceptions with respect to the impact of televised product ads were accentuated when the comparison was made with interpersonally different others. Contrary to predictions, third-person perceptions were not affected by group-based similarity (i.e. ingroup or outgroup other). Results of Experiment 2 (N=102), based on an enduring social identity, indicated that both interpersonal and group-based similarity moderated perceptions of the impact on self and other of least-liked product ads. Overall, third-person effects were more pronounced with respect to interpersonally dissimilar others. However, when social identity was salient, information about interpersonal similarity of the target did not affect perceived self-other differences with respect to ingroup targets. Results also highlighted significant differences in third-person perceptions according to the perceiver's affective evaluation of the persuasive message. (C) 1998 John Wiley & Sons, Ltd.

Relevance:

30.00%

Publisher:

Abstract:

In the last decade, local image features have been widely used in robot visual localization. To assess image similarity, a strategy exploiting these features compares raw descriptors extracted from the current image to those in the models of places. This paper addresses the ensuing step in this process, where a combining function must be used to aggregate results and assign each place a score. Casting the problem in the multiple classifier systems framework, we compare several candidate combiners with respect to their performance in the visual localization task. A deeper insight into the potential of the sum and product combiners is provided by testing two extensions of these algebraic rules: threshold and weighted modifications. In addition, a voting method, previously used in robot visual localization, is assessed. All combiners are tested on a visual localization task, carried out on a public dataset. It is experimentally demonstrated that the sum rule extensions globally achieve the best performance. The voting method, whilst competitive to the algebraic rules in their standard form, is shown to be outperformed by both their modified versions.

Relevance:

30.00%

Publisher:

Abstract:

In the last decade, local image features have been widely used in robot visual localization. In order to assess image similarity, a strategy exploiting these features compares raw descriptors extracted from the current image with those in the models of places. This paper addresses the ensuing step in this process, where a combining function must be used to aggregate results and assign each place a score. Casting the problem in the multiple classifier systems framework, in this paper we compare several candidate combiners with respect to their performance in the visual localization task. For this evaluation, we selected the most popular methods in the class of non-trained combiners, namely the sum rule and product rule. A deeper insight into the potential of these combiners is provided through a discriminativity analysis involving the algebraic rules and two extensions of these methods: the threshold, as well as the weighted modifications. In addition, a voting method, previously used in robot visual localization, is assessed. Furthermore, we address the process of constructing a model of the environment by describing how the model granularity impacts upon performance. All combiners are tested on a visual localization task, carried out on a public dataset. It is experimentally demonstrated that the sum rule extensions globally achieve the best performance, confirming the general agreement on the robustness of this rule in other classification problems. The voting method, whilst competitive with the product rule in its standard form, is shown to be outperformed by its modified versions.
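The non-trained combiners named above have direct one-line definitions over per-place scores. A minimal sketch, with hypothetical place names and scores; the threshold modification (discarding low-confidence classifiers before combining) is omitted for brevity:

```python
import math

def sum_rule(score_sets):
    """score_sets: one {place: score} dict per classifier; sum per place."""
    return {p: sum(s[p] for s in score_sets) for p in score_sets[0]}

def product_rule(score_sets):
    """Multiply per-place scores across classifiers."""
    return {p: math.prod(s[p] for s in score_sets) for p in score_sets[0]}

def weighted_sum(score_sets, weights):
    """Weighted modification of the sum rule."""
    return {p: sum(w * s[p] for w, s in zip(weights, score_sets))
            for p in score_sets[0]}

def majority_vote(score_sets):
    """Each classifier votes for its top-scoring place."""
    votes = {p: 0 for p in score_sets[0]}
    for s in score_sets:
        votes[max(s, key=s.get)] += 1
    return votes

def localize(combined):
    """The localization decision: the place with the highest combined score."""
    return max(combined, key=combined.get)
```

For example, with two feature matchers scoring `{"kitchen": 0.6, "hall": 0.4}` and `{"kitchen": 0.7, "hall": 0.3}`, all four combiners agree on "kitchen"; the rules diverge only when classifiers disagree, which is where the weighted and threshold extensions studied above earn their advantage.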

Relevance:

30.00%

Publisher:

Abstract:

Mice transcutaneously infected with about 400 cercariae were submitted to treatment with oxamniquine (400 mg/kg), 24 hours after infection. The recovery of schistosomules, at 4, 24, 48 and 72 hours and 35 days after treatment, showed the activity of the drug on the parasites, thus practically preventing their migration from the skin to the lungs. Worm recovery performed in the lungs (96 hours after treatment) showed recovery means of 0.6 worms/mouse in the treated group and 53.8 in the control group (untreated). The perfusion of the portal system carried out at 35 days after treatment clearly showed the elimination of all the parasites in the treated group, whereas a recovery mean of 144.7 worms/mouse was detected in the control group (untreated). These findings confirm the efficacy of oxamniquine at the skin phase of infection, and also show similarity with the immunization method that uses irradiated cercariae. The practical application of these findings in clinical practice is also discussed.

Relevance:

30.00%

Publisher:

Abstract:

INTRODUCTION: Domestic dogs are the most important reservoir in the peridomestic transmission cycle of Leishmania (Leishmania) chagasi. The genetic variability of subpopulations of this parasite circulating in dogs has not been thoroughly analyzed in Brazil, even though this knowledge has important implications in the clinical-epidemiological context. METHODS: The objective of this study was to evaluate and compare the phenotypic variability of 153 L. chagasi strains isolated from dogs originating from the municipalities of Rio de Janeiro (n = 57) and Belo Horizonte (n = 96), where the disease is endemic. Strains isolated only from intact skin were selected and analyzed by multilocus enzyme electrophoresis using nine enzyme systems (6PG, GPI, NH1 and NH2, G6P, PGM, MDH, ME, and IDHNADP). RESULTS: The electrophoretic profile was identical for all isolates analyzed and was the same as that of the L. chagasi reference strain (MHOM/BR/74/PP75). Phenetic analysis showed a similarity index of one for all strains, with the isolates sharing 100% of the characteristics analyzed. CONCLUSIONS: The results demonstrate that the L. chagasi populations circulating in dogs from Rio de Janeiro and Belo Horizonte belong to a single zymodeme.

Relevance:

30.00%

Publisher:

Abstract:

This study compared the performance of the MAS-100 and Andersen air samplers and observed a similar trend in both instruments. It also evaluated the microbial contamination levels of 3060 samples from offices, hospitals, industries, and shopping centers in the city of Rio de Janeiro over the period from 1998 to 2002. For each environment, 94.3 to 99.4% of the samples were within the limit allowed in Brazil (750 CFU/m³). The industries' results showed the closest similarity between the fungi and total heterotroph distributions, with the majority of the results between zero and 100 CFU/m³. The offices' results showed dispersion around 300 CFU/m³. The hospitals' results presented the same trend, with an average of 200 CFU/m³. Shopping center environments showed an average of 300 CFU/m³ for fungi, but presented a larger dispersion pattern for the total heterotrophs, with the highest average (1000 CFU/m³). The correlation of the sampling period with the number of airborne microorganisms and with the environmental parameters (temperature and air humidity) was also investigated through principal components analysis. All indoor air sample distributions were very similar, and temperature and air humidity had no significant influence on the dispersion patterns of the samples.

Relevance:

30.00%

Publisher:

Abstract:

This paper is a joint effort between five institutions that introduces several novel similarity measures and combines them to carry out a multimodal segmentation evaluation. The new similarity measures proposed are based on the location and the intensity values of the misclassified voxels, as well as on the connectivity and the boundaries of the segmented data. We show experimentally that the combination of these measures improves the quality of the evaluation. The study that we show here has been carried out using four different segmentation methods from four different labs applied to an MRI simulated dataset of the brain. We claim that our new measures improve the robustness of the evaluation and provide a better understanding of the differences between segmentation methods.

Relevance:

30.00%

Publisher:

Abstract:

Evaluation of segmentation methods is a crucial aspect in image processing, especially in the medical imaging field, where small differences between segmented regions in the anatomy can be of paramount importance. Usually, segmentation evaluation is based on a measure that depends on the number of segmented voxels inside and outside of some reference regions that are called gold standards. Although some other measures have been also used, in this work we propose a set of new similarity measures, based on different features, such as the location and intensity values of the misclassified voxels, and the connectivity and the boundaries of the segmented data. Using the multidimensional information provided by these measures, we propose a new evaluation method whose results are visualized applying a Principal Component Analysis of the data, obtaining a simplified graphical method to compare different segmentation results. We have carried out an intensive study using several classic segmentation methods applied to a set of MRI simulated data of the brain with several noise and RF inhomogeneity levels, and also to real data, showing that the new measures proposed here and the results that we have obtained from the multidimensional evaluation improve the robustness of the evaluation and provide a better understanding of the differences between segmentation methods.
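The voxel-count measures that this work extends are the classic overlap coefficients. A minimal sketch of Dice and Jaccard over sets of voxel coordinates; the location-, intensity-, connectivity-, and boundary-based measures proposed in the paper build on top of such counts:

```python
def dice(seg, ref):
    """Dice overlap between two voxel sets: 2|A∩B| / (|A|+|B|), 1.0 = identical."""
    seg, ref = set(seg), set(ref)
    if not seg and not ref:
        return 1.0
    return 2 * len(seg & ref) / (len(seg) + len(ref))

def jaccard(seg, ref):
    """Jaccard overlap: |A∩B| / |A∪B|."""
    seg, ref = set(seg), set(ref)
    union = seg | ref
    return len(seg & ref) / len(union) if union else 1.0

# Hypothetical 3-voxel segmentations sharing two voxels with the gold standard.
segmented = {(0, 0, 0), (0, 0, 1), (0, 1, 0)}
gold = {(0, 0, 0), (0, 0, 1), (1, 0, 0)}
```

Because these coefficients ignore where the misclassified voxels lie, two segmentations with the same Dice score can differ greatly in clinical quality, which is precisely the limitation that motivates the multidimensional measures above.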

Relevance:

30.00%

Publisher:

Abstract:

False identity documents constitute a potential powerful source of forensic intelligence because they are essential elements of transnational crime and provide cover for organized crime. In previous work, a systematic profiling method using false documents' visual features has been built within a forensic intelligence model. In the current study, the comparison process and metrics lying at the heart of this profiling method are described and evaluated. This evaluation takes advantage of 347 false identity documents of four different types seized in two countries whose sources were known to be common or different (following police investigations and dismantling of counterfeit factories). Intra-source and inter-sources variations were evaluated through the computation of more than 7500 similarity scores. The profiling method could thus be validated and its performance assessed using two complementary approaches to measuring type I and type II error rates: a binary classification and the computation of likelihood ratios. Very low error rates were measured across the four document types, demonstrating the validity and robustness of the method to link documents to a common source or to differentiate them. These results pave the way for an operational implementation of a systematic profiling process integrated in a developed forensic intelligence model.
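The binary-classification side of the validation above reduces to counting similarity scores that fall on the wrong side of a decision threshold. A sketch with hypothetical scores; the study's complementary likelihood-ratio computation, which requires modeling the intra- and inter-source score distributions, is not shown:

```python
def error_rates(intra_scores, inter_scores, threshold):
    """Binary linkage decision: scores >= threshold are declared 'same source'.
    Returns (type I rate: false links, type II rate: missed links)."""
    type_i = sum(s >= threshold for s in inter_scores) / len(inter_scores)
    type_ii = sum(s < threshold for s in intra_scores) / len(intra_scores)
    return type_i, type_ii

# Hypothetical similarity scores between document profiles.
intra = [0.92, 0.85, 0.40]   # pairs known to share a source
inter = [0.10, 0.55, 0.30]   # pairs known to come from different sources
fp_rate, fn_rate = error_rates(intra, inter, threshold=0.6)
```

Sweeping the threshold trades type I against type II errors; the very low rates reported above mean the intra-source and inter-source score distributions barely overlap for all four document types.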