13 results for Visualization Using Computer Algebra Tools

in Biblioteca Digital da Produção Intelectual da Universidade de São Paulo


Relevance: 100.00%

Abstract:

Objective: The purpose of this study was to analyse the use of digital tools for image enhancement of mandibular radiolucent lesions and the effects of this manipulation on the percentage of correct radiographic diagnoses. Methods: Twenty-four panoramic radiographs exhibiting radiolucent lesions were selected, digitized and evaluated by non-experts (undergraduate and newly graduated practitioners) and by professional experts in oral diagnosis. The percentages of correct and incorrect diagnoses, according to the use of brightness/contrast, sharpness, inversion, highlight and zoom tools, were compared. All dental professionals made their evaluations without (T-1) and with (T-2) a list of radiographic diagnostic parameters. Results: Digital tools were used infrequently, particularly in T-2. The most preferred tool was sharpness (45.2%). In the expert group, the percentage of correct diagnoses did not change when any of the digital tools were used. For the non-expert group, there was an increase in the frequency of correct diagnoses when brightness/contrast was used in T-2 (p = 0.008) and when brightness/contrast and sharpness were not used in T-1 (p = 0.027). The use or non-use of brightness/contrast, zoom and sharpness showed moderate agreement in the group of experts [kappa agreement coefficient (kappa) = 0.514, 0.425 and 0.335, respectively]. For the non-expert group there was only slight agreement for all the tools used (kappa ≤ 0.237). Conclusions: Consulting the list of radiographic parameters before image manipulation reduced the frequency of tool use in both groups of examiners. Consulting the radiographic parameters together with the use of some digital tools improved correct diagnosis only in the group of non-expert examiners. Dentomaxillofacial Radiology (2012) 41, 203-210. doi: 10.1259/dmfr/78567773
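
The kappa agreement coefficient reported above is Cohen's kappa, which corrects raw agreement for agreement expected by chance. A minimal sketch for two paired binary ratings (e.g. "tool used / not used" by the same examiner in T-1 vs. T-2); the rating lists are hypothetical, not data from the study.

```python
# Cohen's kappa for two paired ratings. The t1/t2 lists below are
# invented illustrations, not data from the radiograph study.

def cohens_kappa(rater1, rater2):
    """Chance-corrected agreement between two paired ratings."""
    n = len(rater1)
    categories = set(rater1) | set(rater2)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Chance agreement: product of the marginal frequencies per category.
    expected = sum((rater1.count(c) / n) * (rater2.count(c) / n)
                   for c in categories)
    return (observed - expected) / (1 - expected)

t1 = [1, 1, 0, 0, 1, 0, 1, 1]  # hypothetical tool use in session T-1
t2 = [1, 0, 0, 0, 1, 0, 1, 1]  # hypothetical tool use in session T-2
print(round(cohens_kappa(t1, t2), 3))  # -> 0.75
```

On the commonly used Landis-Koch scale, values around 0.41-0.60 are "moderate agreement" (the expert group's brightness/contrast result) and values at or below about 0.20 are "slight agreement" (the non-expert group).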

Relevance: 100.00%

Abstract:

Robust analysis of vector fields has been established as an important tool for deriving insights from the complex systems these fields model. Traditional analysis and visualization techniques rely primarily on computing streamlines through numerical integration. The inherent numerical errors of such approaches are usually ignored, leading to inconsistencies that cause unreliable visualizations and can ultimately prevent in-depth analysis. We propose a new representation for vector fields on surfaces that replaces numerical integration through triangles with maps from the triangle boundaries to themselves. This representation, called edge maps, permits a concise description of flow behaviors and is equivalent to computing all possible streamlines at a user-defined error threshold. Independent of this error, streamlines computed using edge maps are guaranteed to be consistent up to floating-point precision, enabling the stable extraction of features such as the topological skeleton. Furthermore, our representation explicitly stores spatial and temporal errors, which we use to produce more informative visualizations. This work describes the construction of edge maps, the quantification of their error, and a refinement procedure that adheres to a user-defined error bound. Finally, we introduce new visualizations that use the additional information provided by edge maps to indicate the uncertainty involved in computing streamlines and topological structures.
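
A one-dimensional toy (not the paper's actual construction) can illustrate the edge-map idea: rather than numerically integrating through each cell, precompute a map from a cell's inflow boundary to its outflow boundary and trace a "streamline" by composing those maps. The linear per-cell maps below are invented stand-ins for the per-triangle flow.

```python
# Toy boundary-to-boundary maps: a "streamline" is a pure composition
# of precomputed maps, so the result is bitwise reproducible, with no
# step-size-dependent integration error. The coefficients are invented.

def make_edge_map(a, b):
    """Map a boundary parameter t in [0, 1] to the outflow boundary."""
    return lambda t: min(1.0, max(0.0, a * t + b))

cell_maps = [make_edge_map(0.8, 0.1),   # one precomputed map per cell
             make_edge_map(1.2, -0.05),
             make_edge_map(0.9, 0.02)]

def trace(t0, maps):
    """Follow the flow across the strip by composing the cell maps."""
    t = t0
    for m in maps:
        t = m(t)
    return t

print(trace(0.5, cell_maps))
```

Because tracing is deterministic map composition, repeated queries for the same seed are guaranteed to agree, which is the consistency property the abstract emphasizes.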

Relevance: 100.00%

Abstract:

Double-stranded pBS plasmid DNA was irradiated with gamma rays at doses ranging from 1 to 12 kGy and with electron beams at doses from 1 to 10 kGy. Fragment-size distributions were determined by direct visualization, using atomic force microscopy with nanometer resolution operating in non-tapping mode, combined with an improved methodology. The fragment distributions from irradiation with gamma rays revealed discrete-like patterns at all doses, suggesting that these patterns are modulated by the base-pair composition of the plasmid. Irradiation with electron beams, at very high dose rates, generated continuous distributions of highly shattered DNA fragments, similar to results obtained at much lower dose rates reported in the literature. Altogether, these results indicate that AFM could supplement traditional methods for high-resolution measurements of radiation damage to DNA, while providing new and relevant information.
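
The quantity the AFM imaging estimates, the fragment-size distribution, can be sketched with a hedged toy model (not the paper's methodology): place random double-strand breaks on a circular plasmid and collect the resulting fragment sizes. The ~3 kb length and break count are arbitrary illustrative values.

```python
import random

def fragment_sizes(plasmid_len, n_breaks, rng):
    """Fragment sizes after random double-strand breaks on a circle."""
    breaks = sorted(rng.randrange(plasmid_len) for _ in range(n_breaks))
    if not breaks:
        return [plasmid_len]
    sizes = [b2 - b1 for b1, b2 in zip(breaks, breaks[1:])]
    # Circular molecule: the last fragment wraps around to the first break.
    sizes.append(plasmid_len - breaks[-1] + breaks[0])
    return [s for s in sizes if s > 0]  # drop zero-length coincident breaks

rng = random.Random(42)
sizes = fragment_sizes(3000, 8, rng)  # illustrative ~3 kb plasmid, 8 breaks
print(sorted(sizes))
```

By construction the fragments partition the molecule, so their sizes always sum to the plasmid length; a discrete-like (rather than continuous) measured distribution would indicate breaks at preferred positions, as the abstract suggests for gamma irradiation.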

Relevance: 100.00%

Abstract:

Background: The integration of sequencing and gene-interaction data, and the subsequent generation of pathways and networks contained in databases such as KEGG Pathway, is essential for the comprehension of complex biological processes. We noticed the absence of a chart or pathway describing the well-studied preimplantation development stages; furthermore, not all genes involved in the process have entries in KEGG Orthology, information that is important for applying this knowledge to other organisms. Results: In this work we sought to develop the regulatory pathway for the preimplantation development stage using text-mining tools such as Medline Ranker and PESCADOR to reveal biointeractions among the genes involved in this process. The genes present in the resulting pathway were also used as seeds for software developed by our group, called SeedServer, to create clusters of homologous genes. These homologues allowed the determination of the last common ancestor for each gene and revealed that the preimplantation development pathway consists of a conserved ancient core of genes with the addition of modern elements. Conclusions: The generation of regulatory pathways through text-mining tools allows the integration of data generated by several studies for a more complete visualization of complex biological processes. Using the genes in this pathway as "seeds" for the generation of clusters of homologues, the pathway can be visualized for other organisms. The clustering of homologous genes, together with the determination of their ancestry, leads to a better understanding of the evolution of this process.
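
The ancestry step described above can be sketched as follows: take the set of species in which a gene has detected homologues and walk a species tree to the deepest node whose leaves cover that set. The toy tree and species sets are invented; the study used real homologue clusters.

```python
# Last-common-ancestor lookup on a toy species tree. Nodes are
# (name, children) pairs; the tree and queries are illustrative only.

TREE = ("LUCA", [("Metazoa", [("mouse", []), ("human", [])]),
                 ("Fungi", [("yeast", [])])])

def leaves(node):
    name, children = node
    return {name} if not children else set().union(*map(leaves, children))

def last_common_ancestor(node, species):
    name, children = node
    for child in children:
        if species <= leaves(child):      # one subtree covers everything
            return last_common_ancestor(child, species)
    return name                           # this node is the deepest cover

print(last_common_ancestor(TREE, {"mouse", "human"}))  # -> Metazoa
print(last_common_ancestor(TREE, {"human", "yeast"}))  # -> LUCA
```

Genes whose ancestor resolves to a deep node form the "conserved ancient core"; genes resolving to shallow nodes are the more modern additions.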

Relevance: 100.00%

Abstract:

The broad goals of verifiable visualization rely on correct algorithmic implementations. We extend a framework for verification of isosurfacing implementations to check topological properties. Specifically, we use stratified Morse theory and digital topology to design algorithms which verify topological invariants. Our extended framework reveals unexpected behavior and coding mistakes in popular publicly available isosurface codes.
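
One simple topological invariant such a verifier can check is the Euler characteristic V − E + F, which must equal 2 for any isosurface that is a topological sphere. A hedged sketch on a hand-built octahedron (the paper's stratified Morse theory machinery goes far beyond this check):

```python
# Euler characteristic of a triangle mesh: V - E + F. A closed
# isosurface of sphere topology must yield 2; a violation signals a
# topological bug in the extraction code.

def euler_characteristic(triangles):
    vertices = {v for tri in triangles for v in tri}
    edges = {frozenset(e) for tri in triangles
             for e in ((tri[0], tri[1]), (tri[1], tri[2]), (tri[0], tri[2]))}
    return len(vertices) - len(edges) + len(triangles)

# Octahedron: poles 0 and 5, equator 1-4; 6 vertices, 12 edges, 8 faces.
octahedron = [(0, 1, 2), (0, 2, 3), (0, 3, 4), (0, 4, 1),
              (5, 2, 1), (5, 3, 2), (5, 4, 3), (5, 1, 4)]
print(euler_characteristic(octahedron))  # -> 2 (sphere)
```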

Relevance: 100.00%

Abstract:

We use computer algebra to study polynomial identities for the trilinear operation [a, b, c] = abc - acb - bac + bca + cab - cba in the free associative algebra. It is known that [a, b, c] satisfies the alternating property in degree 3, no new identities in degree 5, a multilinear identity in degree 7 which alternates in 6 arguments, and no new identities in degree 9. We use the representation theory of the symmetric group to demonstrate the existence of new identities in degree 11. The only irreducible representations of dimension < 400 with new identities correspond to the partitions 2⁵1 and 2⁴1³ and have dimensions 132 and 165. We construct an explicit new multilinear identity for the partition 2⁵1, and we demonstrate the existence of a new non-multilinear identity in which the underlying variables are permutations of a²b²c²d²e²f.
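
The degree-3 alternating property stated above can be verified by direct expansion: [a, b, c] vanishes whenever two arguments coincide. A minimal sketch in which monomials of the free associative algebra are strings (concatenation is the noncommutative product) and linear combinations are dicts from monomial to integer coefficient:

```python
# Expand [a,b,c] = abc - acb - bac + bca + cab - cba symbolically and
# check that it vanishes on repeated arguments.

def add(p, q):
    out = dict(p)
    for mono, coeff in q.items():
        out[mono] = out.get(mono, 0) + coeff
        if out[mono] == 0:
            del out[mono]                 # drop cancelled terms
    return out

def bracket(a, b, c):
    """[a,b,c] = abc - acb - bac + bca + cab - cba."""
    out = {}
    for (x, y, z), s in [((a, b, c), 1), ((a, c, b), -1), ((b, a, c), -1),
                         ((b, c, a), 1), ((c, a, b), 1), ((c, b, a), -1)]:
        out = add(out, {x + y + z: s})
    return out

print(bracket("a", "a", "c"))  # -> {} : vanishes with a repeated argument
print(add(bracket("a", "b", "c"), bracket("b", "a", "c")))  # -> {}
```

The second print confirms the linearized form: since [x, x, c] = 0 for all x, the operation is antisymmetric in its first two arguments. Degree-11 computations like those in the paper require the representation-theoretic machinery it describes, far beyond this sketch.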

Relevance: 100.00%

Abstract:

Creating high-quality quad meshes from triangulated surfaces is a highly nontrivial task that necessitates consideration of various application-specific quality metrics. In our work, we follow the premise that automatic reconstruction techniques may not generate outputs meeting all the subjective quality expectations of the user. Instead, we put the user at the center of the process by providing a flexible, interactive approach to quadrangulation design. By combining scalar field topology and combinatorial connectivity techniques, we present a new framework, following a coarse-to-fine design philosophy, which allows explicit control of the subjective quality criteria on the output quad mesh, at interactive rates. Our quadrangulation framework uses the new notion of Reeb atlas editing to define, with a small number of interactions, a coarse quadrangulation of the model, capturing the main features of the shape, with user-prescribed extraordinary vertices and alignment. Fine-grain tuning is easily achieved with the notion of connectivity texturing, which allows additional extraordinary-vertex specification and explicit feature alignment to capture the high-frequency geometries. Experiments demonstrate the interactivity and flexibility of our approach, as well as its ability to generate quad meshes of arbitrary resolution with high-quality statistics, while meeting the user's own subjective requirements.
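
User-prescribed extraordinary vertices (vertices of valence other than 4) are not freely placeable: on a closed quad mesh, the sum of (4 − valence) over all vertices equals four times the Euler characteristic. A hedged illustration (not the paper's framework) on a cube, whose eight corners are all valence-3 extraordinary vertices:

```python
from collections import defaultdict

# Total valence defect of a closed quad mesh. On a closed manifold
# mesh, a vertex's valence equals the number of quads containing it.

def valence_defect(quads):
    count = defaultdict(int)
    for q in quads:
        for v in q:
            count[v] += 1
    return sum(4 - c for c in count.values())

# Cube faces as vertex-index quads (vertices 0-7).
cube = [(0, 1, 2, 3), (4, 5, 6, 7), (0, 1, 5, 4),
        (1, 2, 6, 5), (2, 3, 7, 6), (3, 0, 4, 7)]
print(valence_defect(cube))  # -> 8 = 4 * Euler characteristic of a sphere
```

Any interactive placement of extraordinary vertices on a sphere-topology model must keep this total defect at 8, which is one reason such control is a global, topology-aware design problem.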

Relevance: 100.00%

Abstract:

We apply Kolesnikov's algorithm to obtain a variety of nonassociative algebras defined by right anticommutativity and a "noncommutative" version of the Malcev identity. We use computer algebra to verify that these identities are equivalent to the identities of degree up to 4 satisfied by the dicommutator in every alternative dialgebra. We extend these computations to show that any special identity for Malcev dialgebras must have degree at least 7. Finally, we introduce a trilinear operation which makes any Malcev dialgebra into a Leibniz triple system.

Relevance: 100.00%

Abstract:

Aim: Estimates of geographic range size derived from natural history museum specimens are probably biased for many species. We aim to determine how bias in these estimates relates to range size. Location: We conducted computer simulations based on herbarium specimen records from localities ranging from the southern United States to northern Argentina. Methods: We used theory on the sampling distribution of the mean and variance to develop working hypotheses about how range size, defined as area of occupancy (AOO), was related to the inter-specific distribution of: (1) mean collection effort per area across the range of a species (MC); (2) variance in collection effort per area across the range of a species (VC); and (3) proportional bias in AOO estimates (PBias: the difference between the expected value of the estimate of AOO and true AOO, divided by true AOO). We tested predictions from these hypotheses using computer simulations based on a dataset of more than 29,000 herbarium specimen records documenting occurrences of 377 plant species in the tribe Bignonieae (Bignoniaceae). Results: The working hypotheses predicted that the means of the inter-specific distributions of MC, VC and PBias were independent of AOO, but that the respective variance and skewness decreased with increasing AOO. Computer simulations supported all but one prediction: the variance of the inter-specific distribution of VC did not decrease with increasing AOO. Main conclusions: Our results suggest that, despite an invariant mean, the dispersion and asymmetry of the inter-specific distribution of PBias decrease as AOO increases. As AOO increased, range size was less severely underestimated for a large proportion of simulated species. However, as AOO increased, range size estimates having extremely low bias were less common.
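
The PBias quantity defined above, (E[estimated AOO] − true AOO) / true AOO, can be sketched with a toy Monte Carlo model. Here AOO is a count of occupied grid cells, a species is "collected" in each occupied cell with some probability, and the estimate is the number of cells with at least one collection; all numbers are hypothetical, not the study's simulation design.

```python
import random

# Toy estimate of PBias = (E[estimated AOO] - true AOO) / true AOO.
# p_collect stands in for collection effort per cell; with incomplete
# effort the estimate can only miss cells, so PBias <= 0 here
# (underestimation), matching the direction reported in the study.

def simulate_pbias(true_aoo, p_collect, trials, rng):
    total_est = 0
    for _ in range(trials):
        # Estimated AOO: occupied cells with at least one collection.
        total_est += sum(rng.random() < p_collect for _ in range(true_aoo))
    expected_est = total_est / trials
    return (expected_est - true_aoo) / true_aoo

rng = random.Random(0)
print(round(simulate_pbias(true_aoo=50, p_collect=0.6, trials=2000,
                           rng=rng), 3))
```

With per-cell detection probability 0.6 the expected PBias is −0.4; the Monte Carlo value fluctuates around that.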

Relevance: 100.00%

Abstract:

Background: Psychosis has various causes, including mania and schizophrenia. Since the differential diagnosis of psychosis is based exclusively on subjective assessments of oral interviews with patients, an objective quantification of the speech disturbances that characterize mania and schizophrenia is in order. In principle, such quantification could be achieved by the analysis of speech graphs. A graph represents a network with nodes connected by edges; in speech graphs, nodes correspond to words and edges correspond to semantic and grammatical relationships. Methodology/Principal Findings: To quantify speech differences related to psychosis, interviews with schizophrenics, manics and normal subjects were recorded and represented as graphs. Manics scored significantly higher than schizophrenics in ten graph measures. Psychopathological symptoms such as logorrhea, poor speech, and flight of thoughts were grasped by the analysis even when verbosity differences were discounted. Binary classifiers based on speech graph measures sorted schizophrenics from manics with up to 93.8% sensitivity and 93.7% specificity. In contrast, sorting based on the scores of two standard psychiatric scales (BPRS and PANSS) reached only 62.5% sensitivity and specificity. Conclusions/Significance: The results demonstrate that alterations of the thought process manifested in the speech of psychotic patients can be objectively measured using graph-theoretical tools developed to capture specific features of the normal and dysfunctional flow of thought, such as divergence and recurrence. The quantitative analysis of speech graphs is not redundant with standard psychometric scales but rather complementary, as it yields a very accurate sorting of schizophrenics and manics. Overall, the results point to automated psychiatric diagnosis based not on what is said, but on how it is said.
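
The speech-graph representation described above can be sketched minimally: each word becomes a node and each transition between consecutive words a directed edge. The measures below (node count, distinct edges, repeated edges as a stand-in for recurrence) are simple examples of the kind of graph measure used; the study's actual measure set was larger, and the sample text is invented.

```python
# Build a word-transition graph from a transcript and report a few
# simple graph measures. Sample text is invented, not patient speech.

def speech_graph(words):
    nodes = set(words)
    edges = list(zip(words, words[1:]))  # consecutive-word transitions
    return nodes, edges

def graph_measures(words):
    nodes, edges = speech_graph(words)
    return {
        "nodes": len(nodes),
        "edges": len(set(edges)),                        # distinct transitions
        "repeated_edges": len(edges) - len(set(edges)),  # recurrence
    }

sample = "i went out then i went home then i slept".split()
print(graph_measures(sample))  # -> {'nodes': 6, 'edges': 7, 'repeated_edges': 2}
```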

Relevance: 100.00%

Abstract:

Background: Recent advances in medical and biological technology have stimulated the development of new testing systems that provide huge, varied amounts of molecular and clinical data. Growing data volumes pose significant challenges for information processing systems in research centers. Additionally, the routines of a genomics laboratory are typically characterized by high parallelism in testing and constant procedure changes. Results: This paper describes a formal approach to address this challenge through the implementation of a genetic testing management system applied to a human genome laboratory. We introduce the Human Genome Research Center Information System (CEGH) in Brazil, a system that is able to support constant changes in human genome testing and can provide patients with updated results based on the most recent and validated genetic knowledge. Our approach uses a common repository for process planning to ensure reusability, specification, instantiation, monitoring, and execution of processes, which are defined using a relational database and rigorous control-flow specifications based on process algebra (ACP). The main difference between our approach and related work is that we join two important aspects: (1) process scalability, achieved through the relational database implementation, and (2) process correctness, ensured using process algebra. Furthermore, the software allows end users to define genetic tests without requiring any knowledge of business process notation or process algebra. Conclusions: This paper presents the CEGH information system, a Laboratory Information Management System (LIMS) based on a formal framework to support genetic testing management for Mendelian disorder studies. We have demonstrated the feasibility and shown the usability benefits of a rigorous approach that is able to specify, validate, and perform genetic testing using simple end-user interfaces.
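
The process-algebra idea behind such control-flow specifications can be sketched with a tiny interpreter: a workflow is a term built from atomic steps, sequential composition and alternative composition, and its allowed execution orders are the term's traces. This is a hedged toy in the spirit of ACP, not the CEGH implementation, and the workflow steps are invented.

```python
# Enumerate the traces (allowed execution orders) of a process term.
# Terms: atomic step (string), ("seq", x, y) for x then y, and
# ("alt", x, y) for a choice between x and y.

def traces(term):
    if isinstance(term, str):              # atomic step
        return [[term]]
    op, left, right = term
    if op == "seq":                        # x . y : every x-trace then y-trace
        return [l + r for l in traces(left) for r in traces(right)]
    if op == "alt":                        # x + y : choose one branch
        return traces(left) + traces(right)
    raise ValueError(op)

workflow = ("seq", "extract_dna",
            ("seq", ("alt", "sanger_seq", "array_test"), "report"))
for t in traces(workflow):
    print(" -> ".join(t))
```

Validating a running test against the specification then reduces to checking that its observed event sequence is one of the enumerated traces.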

Relevance: 100.00%

Abstract:

Background: HCV is prevalent throughout the world and is a major cause of chronic liver disease. There is no effective vaccine, and the most common therapy, based on Peginterferon, has a success rate of ~50%. The mechanisms underlying viral resistance have not been elucidated, but it has been suggested that both host and virus contribute to therapy outcome. The non-structural 5A (NS5A) protein, a critical virus component, is involved in cellular and viral processes. Methods: The present study analyzed structural and functional features of 345 sequences of HCV-NS5A genotypes 1 and 3, using in silico tools. Results: There were differences in residue-type composition and secondary structure between the genotypes. In addition, secondary structure variance differed statistically between response groups in genotype 3. A motif search indicated conserved glycosylation, phosphorylation and myristoylation sites that could be important in structural stabilization and function. Furthermore, a highly conserved integrin ligation site was identified, which could be linked to nuclear forms of NS5A. ProtFun indicated NS5A to have diverse enzymatic and non-enzymatic activities, participating in a great range of cell functions, with statistical differences between genotypes. Conclusion: This study presents new insights into HCV-NS5A. It is the first study that, using bioinformatics tools, suggests differences between genotypes and therapy response that can be related to NS5A protein features, and it therefore emphasizes the importance of bioinformatics tools in viral studies. Data acquired herein will aid in clarifying the structure/function of this protein and in the development of antiviral agents.
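
The kind of motif scan mentioned above can be sketched with a regular expression: find candidate N-glycosylation sequons (Asn-X-Ser/Thr, with X not Pro) in a protein sequence. The sequence below is invented, not an HCV-NS5A sequence, and real motif searches (e.g. against PROSITE patterns) apply further context rules.

```python
import re

# Scan a protein sequence for the N-glycosylation sequon N-{P}-[ST].
# A lookahead is used so that overlapping sequons are all reported.

SEQUON = re.compile(r"(?=(N[^P][ST]))")

def find_sequons(seq):
    """Return (1-based position, matched triplet) for each sequon."""
    return [(m.start() + 1, m.group(1)) for m in SEQUON.finditer(seq)]

protein = "MKTANGSWLNPTQNVSA"  # invented sequence
print(find_sequons(protein))   # -> [(5, 'NGS'), (14, 'NVS')]
```

Note that the NPT at position 10 is correctly rejected, since proline in the middle position blocks glycosylation.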

Relevance: 50.00%

Abstract:

Background: The study and analysis of gene expression measurements is the primary focus of functional genomics. Once expression data are available, biologists are faced with the task of extracting (new) knowledge associated with the underlying biological phenomenon. Most often, in order to perform this task, biologists execute a number of analysis activities on the available gene expression dataset rather than a single analysis activity. The integration of heterogeneous tools and data sources to create an integrated analysis environment represents a challenging and error-prone task. Semantic integration enables the assignment of unambiguous meanings to data shared among different applications in an integrated environment, allowing the exchange of data in a semantically consistent and meaningful way. This work aims at developing an ontology-based methodology for the semantic integration of gene expression analysis tools and data sources. The proposed methodology relies on software connectors to support not only the access to heterogeneous data sources but also the definition of transformation rules on exchanged data. Results: We have studied the different challenges involved in the integration of computer systems and the role software connectors play in this task. We have also studied a number of gene expression technologies, analysis tools and related ontologies in order to devise basic integration scenarios and propose a reference ontology for the gene expression domain. Then, we have defined a number of activities and associated guidelines to prescribe how the development of connectors should be carried out. Finally, we have applied the proposed methodology in the construction of three different integration scenarios involving the use of different tools for the analysis of different types of gene expression data.
Conclusions: The proposed methodology facilitates the development of connectors capable of semantically integrating different gene expression analysis tools and data sources. The methodology can be used in the development of connectors supporting both simple and nontrivial processing requirements, thus assuring accurate data exchange and information interpretation from exchanged data.
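
The software-connector idea can be sketched minimally: a connector exposes a source tool's records to a target tool by applying declarative transformation rules keyed to a shared reference ontology. The field names, ontology terms and the unit conversion below are all hypothetical illustrations, not the paper's reference ontology.

```python
# A toy connector: each rule maps a source field to an ontology-keyed
# target field plus a value transform. All names here are invented.

RULES = {
    # source field -> (ontology term / target field, value transform)
    "probe_id": ("gene_expression:probe", str),
    "log2fc":   ("gene_expression:fold_change", lambda v: 2.0 ** v),
    "pval":     ("gene_expression:p_value", float),
}

def connect(record):
    """Translate one source record into the target tool's vocabulary."""
    return {target: transform(record[src])
            for src, (target, transform) in RULES.items()
            if src in record}

source_record = {"probe_id": "AFFX_001", "log2fc": 1.0, "pval": 0.01}
print(connect(source_record))
```

Keeping the rules declarative is what lets the same connector machinery serve both the simple renaming cases and the nontrivial transformations mentioned in the conclusions.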