990 results for Tag visualization
Abstract:
This work surveys the problems associated with the influence of observability and radial visualization on the design of monitoring systems for networks of great size and complexity, and proposes solutions to some of these problems. Using Complex Network Theory, two questions are addressed: (i) the placement and number of nodes required to guarantee data acquisition that effectively represents the state of the network, and (ii) the design of a visualization model for network information that broadens the capacity to infer and understand its properties. The thesis establishes theoretical bounds on these questions and presents a study of the complexity of effective, efficient and scalable network monitoring.
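The abstract does not specify which complex-network measures the thesis uses to place monitoring nodes. Purely as a hedged illustration of the placement question in item (i), the sketch below ranks candidate monitors by betweenness centrality with networkx; the toy topology, the budget and the one-hop coverage check are assumptions, not the thesis's method.

```python
# Hypothetical sketch: choosing monitoring nodes by network centrality.
# This is NOT the thesis's method; it only illustrates the kind of
# placement question the abstract raises. All names and parameters are assumptions.
import networkx as nx

def pick_monitoring_nodes(graph: nx.Graph, budget: int) -> list:
    """Return `budget` nodes ranked by betweenness centrality,
    a common proxy for how much traffic a node can observe."""
    centrality = nx.betweenness_centrality(graph)
    ranked = sorted(centrality, key=centrality.get, reverse=True)
    return ranked[:budget]

if __name__ == "__main__":
    g = nx.barabasi_albert_graph(n=200, m=2, seed=42)  # toy scale-free topology
    monitors = pick_monitoring_nodes(g, budget=5)
    covered = set(monitors)
    for m in monitors:
        covered.update(g.neighbors(m))  # nodes within one hop of a monitor
    print(f"monitors: {monitors}")
    print(f"one-hop coverage: {len(covered)}/{g.number_of_nodes()} nodes")
```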
Abstract:
We revisit the visibility problem, which is to determine the set of primitives potentially visible in geometry data represented by a data structure such as a mesh of polygons or triangles, and we propose a solution for speeding up three-dimensional visualization in applications. We introduce a lean structure, in the sense of data abstraction and reduction, which can be used in online and interactive applications. The visibility problem is especially important in the 3D visualization of scenes represented by large volumes of data, when it is not worthwhile to keep all polygons of the scene in memory: doing so increases rendering time, or is simply impossible for huge volumes of data. In these cases, given a viewing position and direction, the main objective is to determine and load a minimal amount of primitives (polygons) of the scene in order to accelerate the rendering step. For this purpose, our algorithm culls primitives using a hybrid paradigm based on three known techniques. The scene is divided into a grid of cells, each cell is associated with the primitives that belong to it, and finally the set of potentially visible primitives is determined. The novelty is the use of the Ja1 triangulation to create the subdivision grid. We chose this structure because of its adaptivity and algebraic nature (ease of calculation). The results show a substantial improvement over the traditional methods applied separately. The method introduced in this work can be used on devices with little or no dedicated graphics processing power, and can also be used to view data over the Internet, as in virtual museum applications.
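As a hedged illustration of the culling pipeline the abstract outlines (grid subdivision, per-cell primitive lists, extraction of a potentially visible set), the sketch below uses a plain uniform grid and a simple view-cone test; it does not reproduce the paper's Ja1-based subdivision, and the cell size and angle threshold are assumptions.

```python
# Hedged sketch of grid-based visibility culling, the general idea the
# abstract describes. It uses a plain uniform grid, NOT the Ja1
# triangulation of the paper; cell size and the view test are assumptions.
import math
from collections import defaultdict

def build_grid(triangles, cell_size):
    """Map each triangle (three (x, y, z) points) to the grid cell
    its centroid falls in."""
    grid = defaultdict(list)
    for tri in triangles:
        cx = sum(p[0] for p in tri) / 3.0
        cy = sum(p[1] for p in tri) / 3.0
        cz = sum(p[2] for p in tri) / 3.0
        key = (int(cx // cell_size), int(cy // cell_size), int(cz // cell_size))
        grid[key].append(tri)
    return grid

def potentially_visible(grid, cell_size, eye, view_dir, max_dist, fov_cos=0.5):
    """Collect primitives in cells whose center lies inside a simple view cone
    (angle test against `view_dir`, range test against `max_dist`)."""
    norm = math.sqrt(sum(d * d for d in view_dir))  # view_dir assumed non-zero
    d = tuple(c / norm for c in view_dir)
    visible = []
    for (i, j, k), tris in grid.items():
        center = ((i + 0.5) * cell_size, (j + 0.5) * cell_size, (k + 0.5) * cell_size)
        to_cell = tuple(c - e for c, e in zip(center, eye))
        dist = math.sqrt(sum(t * t for t in to_cell))
        if dist > max_dist or dist == 0.0:
            continue
        cos_angle = sum(t * v for t, v in zip(to_cell, d)) / dist
        if cos_angle >= fov_cos:  # cell center inside the view cone
            visible.extend(tris)
    return visible
```

Only the triangles returned by potentially_visible would be loaded and passed to the renderer; everything else stays out of memory, which is the point of the culling step.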
Abstract:
Recent years have seen an increase in the acceptance and adoption of parallel processing, both for high-performance scientific computing and for general-purpose applications. This acceptance has been favored mainly by the development of massively parallel processing (MPP) environments and of distributed computing. A point in common between distributed systems and MPP architectures is the notion of message passing, which allows communication between processes. A message-passing environment consists basically of a communication library that acts as an extension of the programming languages used to write parallel applications, such as C, C++ and Fortran. In the development of parallel applications, a fundamental aspect is their performance analysis. Several metrics can be used in this analysis: execution time, efficiency in the use of the processing elements, and scalability of the application with respect to the number of processors or to the size of the problem instance. Establishing models or mechanisms that support this analysis can be a rather complicated task, given the parameters and degrees of freedom involved in the implementation of a parallel application. An alternative that has been adopted is the use of tools for the collection and visualization of performance data, which allow the user to identify bottlenecks and sources of inefficiency in an application. For an effective visualization it is necessary to identify and collect data related to the execution of the application, a stage called instrumentation. This work presents, initially, a study of the main techniques used for collecting performance data, followed by a detailed analysis of the main available tools that can be used on parallel architectures of the Beowulf cluster type running Linux on the x86 platform, with communication libraries based on MPI (Message Passing Interface), such as LAM and MPICH. This analysis is validated on parallel applications that deal with the problem of training perceptron-type neural networks using backpropagation. The conclusions show the potential and ease of use of the analyzed tools.
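The instrumentation stage described above — collecting timing data per process so that a visualization tool can expose bottlenecks — can be sketched minimally as follows. The sketch uses mpi4py and MPI.Wtime for brevity, whereas the work itself targets C/C++/Fortran codes under LAM and MPICH; the traced operation and the gathering of timings on rank 0 are assumptions.

```python
# Minimal, hypothetical instrumentation sketch: time a communication phase
# with MPI.Wtime so per-rank data can later be collected and visualized.
# mpi4py is used here for brevity; the surveyed tools target C/C++/Fortran.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

t0 = MPI.Wtime()
local = float(rank)                        # toy per-process workload
total = comm.allreduce(local, op=MPI.SUM)  # traced communication step
t1 = MPI.Wtime()

# Gather the per-rank timings on rank 0 -- the kind of raw data a
# performance-visualization tool would plot as a timeline.
timings = comm.gather(t1 - t0, root=0)
if rank == 0:
    for r, t in enumerate(timings):
        print(f"rank {r}: allreduce phase took {t:.6f} s (sum = {total})")
```

Launched with mpiexec -n 4, each rank reports the duration of its communication phase, which is the sort of per-process measurement the visualization tools discussed in the work consume.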
Abstract:
Self-organizing maps (SOM) are artificial neural networks widely used in the data mining field, mainly because they constitute a dimensionality reduction technique given the fixed grid of neurons associated with the network. In order to properly partition and visualize the SOM network, the various methods available in the literature must be applied in a post-processing stage, which consists of inferring, through its neurons, relevant characteristics of the data set. In general, such processing applied to the network neurons, instead of the entire database, reduces the computational cost due to vector quantization. This work proposes a post-processing of the SOM neurons in the input and output spaces, combining visualization techniques with algorithms based on gravitational forces and on the search for the shortest path with the greatest reward. Such methods take into account the connection strength between neighbouring neurons and characteristics of pattern density and distances among neurons, both associated with the position the neurons occupy in the data space after training the network. The goal is thus to define more clearly the arrangement of the clusters present in the data. Experiments were carried out to evaluate the proposed methods using several artificially generated data sets as well as real-world data sets. The results obtained were compared with those of a number of well-known methods from the literature.
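The work's gravitational and shortest-path post-processing is not reproduced here; as a hedged example of what operating on the neuron grid (rather than on the whole database) looks like, the sketch below computes a U-matrix, a classical neuron-level visualization. The grid shape and the random stand-in weights are assumptions.

```python
# Hedged sketch: a U-matrix, one classical post-processing of SOM neurons.
# It is NOT the gravitational / shortest-path method of the work; it only
# shows the idea of working on the neuron grid rather than on the raw data.
import numpy as np

def u_matrix(weights: np.ndarray) -> np.ndarray:
    """weights has shape (rows, cols, dim): one weight vector per neuron.
    Returns, per neuron, the mean distance to its 4-connected grid neighbours;
    high values mark cluster borders, low values mark cluster interiors."""
    rows, cols, _ = weights.shape
    umat = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            dists = []
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < rows and 0 <= nj < cols:
                    dists.append(np.linalg.norm(weights[i, j] - weights[ni, nj]))
            umat[i, j] = np.mean(dists)
    return umat

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    toy_weights = rng.normal(size=(10, 10, 3))  # stands in for a trained SOM
    print(u_matrix(toy_weights).round(2))
```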
Abstract:
Open reading frame expressed sequence tags (ORESTES) differ from conventional ESTs by providing sequence data from the central protein-coding portion of transcripts. We generated a total of 696,745 ORESTES sequences from 24 human tissues and used a subset of the data that corresponds to a set of 15,095 full-length mRNAs as a means of assessing the efficiency of the strategy and its potential contribution to the definition of the human transcriptome. We estimate that ORESTES sampled over 80% of all highly and moderately expressed, and between 40% and 50% of rarely expressed, human genes. In our most thoroughly sequenced tissue, the breast, the 130,000 ORESTES generated are derived from transcripts of an estimated 70% of all genes expressed in that tissue, with an equally efficient representation of both highly and poorly expressed genes. In this respect, we find that the capacity of the ORESTES strategy both for gene discovery and for shotgun transcript sequence generation significantly exceeds that of conventional ESTs. The distribution of ORESTES is such that many human transcripts are now represented by a scaffold of partial sequences distributed along the length of each gene product. The experimental joining of the scaffold components, by reverse transcription-PCR, represents a direct route to transcript finishing that may be a useful alternative to full-length cDNA cloning.
Abstract:
Transcribed sequences in the human genome can be identified with confidence only by alignment with sequences derived from cDNAs synthesized from naturally occurring mRNAs. We constructed a set of 250,000 cDNAs that represent partial expressed gene sequences and that are biased toward the central coding regions of the resulting transcripts. They are termed ORF expressed sequence tags (ORESTES). The 250,000 ORESTES were assembled into 81,429 contigs. Of these, 1,181 (1.45%) were found to match sequences in chromosome 22, with at least one ORESTES contig for 162 (65.6%) of the 247 known genes, for 67 (44.6%) of the 150 related genes, and for 45 (30.4%) of the 148 EST-predicted genes on this chromosome. Using a set of stringent criteria to validate our sequences, we identified a further 219 previously unannotated transcribed sequences on chromosome 22. Of these, 171 were in fact also defined by EST or full-length cDNA sequences available in GenBank but not utilized in the initial annotation of the first human chromosome sequence. Thus, despite representing less than 15% of all expressed human sequences in the public databases at the time of the present analysis, ORESTES sequences defined 48 transcribed sequences on chromosome 22 not defined by other sequences. All of the transcribed sequences defined by ORESTES coincided with DNA regions predicted as encoding exons by GENSCAN.
Abstract:
In this work we apply a sensitive, low-cost optical method known as Schlieren to visualize and characterize ultrasonic beams in aqueous solution. This characterization plays an important role in medicine for non-invasive diagnosis by ultrasonic methods.
Abstract:
The difficulties that engineering students experience in learning Technical Drawing are related to their level of aptitude. To improve the teaching process, it would be necessary to detect at the outset the students who need the most support. This study describes the usefulness of a Spatial Visualization test and an Inductive Reasoning test for predicting students' performance in Technical Drawing. The sample comprised 484 first-year engineering students from four Brazilian higher-education institutions. The data were analyzed with the Rasch model. The results suggest that Spatial Visualization aptitude is the best predictor.
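The abstract only states that the responses were analyzed with the Rasch model. As an illustration of that model, not of the study's actual analysis, the sketch below evaluates the dichotomous Rasch probability and a crude ability estimate; the item difficulties and the response pattern are invented.

```python
# Illustrative sketch of the dichotomous Rasch model mentioned above:
# P(correct) = exp(theta - b) / (1 + exp(theta - b)), where theta is the
# student's ability and b the item difficulty. All data below are made up.
import math

def rasch_prob(theta: float, b: float) -> float:
    """Probability that a student of ability `theta` answers an item of
    difficulty `b` correctly."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def estimate_ability(responses, difficulties, steps=50, lr=0.1):
    """Crude maximum-likelihood ability estimate by gradient ascent:
    the log-likelihood gradient is the sum of (observed - expected)."""
    theta = 0.0
    for _ in range(steps):
        grad = sum(x - rasch_prob(theta, b) for x, b in zip(responses, difficulties))
        theta += lr * grad
    return theta

if __name__ == "__main__":
    difficulties = [-1.0, -0.5, 0.0, 0.5, 1.0, 1.5]  # hypothetical items
    responses = [1, 1, 1, 1, 0, 0]                   # one student's answers
    print(f"estimated ability: {estimate_ability(responses, difficulties):.2f}")
```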
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Whereas genome sequencing defines the genetic potential of an organism, transcript sequencing defines the utilization of this potential and links the genome with most areas of biology. To exploit the information within the human genome in the fight against cancer, we have deposited some two million expressed sequence tags (ESTs) from human tumors and their corresponding normal tissues in the public databases. The data currently define approximately 23,500 genes, of which only approximately 1,250 are still represented only by ESTs. Examination of the EST coverage of known cancer-related (CR) genes reveals that <1% do not have corresponding ESTs, indicating that the representation of genes associated with commonly studied tumors is high. The careful recording of the origin of all ESTs we have produced has enabled detailed definition of where the genes they represent are expressed in the human body. More than 100,000 ESTs are available for seven tissues, indicating a surprising variability of gene usage that has led to the discovery of a significant number of genes with restricted expression, and that may thus be therapeutically useful. The ESTs also reveal novel nonsynonymous germline variants (although the one-pass nature of the data necessitates careful validation) and many alternatively spliced transcripts. Although widely exploited by the scientific community, vindicating our totally open source policy, the EST data generated still provide extensive information that remains to be systematically explored, and that may further facilitate progress toward both the understanding and treatment of human cancers.
Abstract:
Leafcutters are the most highly evolved of the Neotropical ants in the tribe Attini and are model systems for studying caste formation, division of labor and symbiosis with microorganisms. Some species of leafcutters are agricultural pests controlled by chemicals which affect other animals and accumulate in the environment. Aiming to provide a genetic basis for the study of leafcutters and for the development of more specific and environmentally friendly methods for the control of pest leafcutters, we generated expressed sequence tag data from Atta laevigata, one of the pest ants with a broad geographic distribution in South America. Results: The analysis of the expressed sequence tags allowed us to characterize 2,006 unique sequences in Atta laevigata. Sixteen of these genes had a high number of transcripts and are likely under positive selection for a high level of gene expression, being responsible for three basic biological functions: energy conservation through redox reactions in mitochondria; cytoskeleton and muscle structuring; and regulation of gene expression and metabolism. Based on the leafcutters' lifestyle and on reports of genes involved in key processes of other social insects, we identified 146 sequences as potential targets for controlling pest leafcutters. The targets are responsible for antixenobiosis, development and longevity, immunity, resistance to pathogens, pheromone function, cell signaling, behavior, polysaccharide metabolism and arginine kinase activity. Conclusion: The generation and analysis of expressed sequence tags from Atta laevigata have provided an important genetic basis for future studies on the biology of leaf-cutting ants and may contribute to the development of more specific and environmentally friendly methods for the control of agricultural pest leafcutters.
Abstract:
PURPOSE: To investigate the penetration (tags) of adhesive materials into enamel etched with phosphoric acid or treated with a self-etching adhesive, before application of a pit-and-fissure sealant. MATERIALS AND METHODS: The sample comprised six study groups with six specimens each. Before pit-and-fissure sealing with the materials Clinpro SealantTM (Groups I and II), Vitro Seal ALPHA (Groups III and IV) and Fuji II LC (Groups V and VI), the teeth in Groups I, III, and V were etched with 35% phosphoric acid for 30 seconds. Teeth in Groups II, IV, and VI received application of the self-etching adhesive Adper Prompt L-Pop. The treated teeth were sectioned buccolingually, ground to 100-microm thickness, decalcified, and analyzed by conventional light microscopy at 400x magnification. RESULTS: The teeth etched with phosphoric acid exhibited significantly greater penetration than specimens treated with self-etching adhesive. CLINICAL SIGNIFICANCE: When compared with enamel treated with a self-etching adhesive, the penetration (tags) of adhesive materials into enamel was greater when applied on enamel etched with phosphoric acid.
Abstract:
The objective of this study was to measure the thickness of the hybrid layer (HLT), the length of resin tags (RTL) and the bond strength (BS) in the same teeth, using the self-etching adhesive system Adper Prompt L Pop on intact dentin, and to analyze the correlation of HLT and RTL with BS. Ten human molars were used for the restorative procedures and each restored tooth was sectioned in the mesio-distal direction. One section was submitted to light microscopy analysis of HLT and RTL (400x). Another section was prepared and submitted to the microtensile bond test (0.5 mm/min). The fractured surfaces were analyzed using scanning electron microscopy to determine the failure pattern. The correlation of HLT and RTL with the BS data was analyzed by linear regression. The mean values of HLT, RTL and BS were 3.36 microm, 12.97 microm and 14.10 MPa, respectively. No significant relationship was observed between BS and HLT (R2 = 0.011, p > 0.05) or between BS and RTL (R2 = 0.038). The results suggested that there was no significant correlation between HLT or RTL and the BS of the self-etching adhesive to dentin.
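As a hedged sketch of the correlation analysis reported above — simple linear regression of bond strength on hybrid layer thickness, reported as R2 and p — the example below uses scipy.stats.linregress on invented values, not the study's data.

```python
# Hedged sketch of the correlation analysis described in the abstract:
# simple linear regression of bond strength (BS) on hybrid layer thickness
# (HLT), reporting R^2 and p. The values below are invented, NOT study data.
from scipy.stats import linregress

hlt_um = [2.8, 3.1, 3.3, 3.4, 3.5, 3.6, 3.2, 3.7, 3.5, 3.5]            # hypothetical, micrometres
bs_mpa = [13.2, 15.0, 12.8, 14.6, 13.9, 14.4, 15.2, 13.5, 14.8, 13.6]  # hypothetical, MPa

result = linregress(hlt_um, bs_mpa)
r_squared = result.rvalue ** 2
print(f"R^2 = {r_squared:.3f}, p = {result.pvalue:.3f}")
# An R^2 near zero with p > 0.05, as in the abstract, indicates no
# significant linear relationship between HLT and BS.
```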
Abstract:
This experimental light microscopy study investigated the formation of a hybrid layer and resin tags on sound dentin, after utilization of conventional and self-etching adhesive systems. After restorative procedures, the specimens were decalcified in a formic acid and sodium citrate solution, embedded in paraffin, sectioned at 6-microm thickness and stained by the Brown & Brenn method for analysis and measurement by light microscopy (AXIOPHOT) (400x). The results were statistically analyzed by analysis of variance, at a significance level of 5%. Based on the results, it could be concluded that the conventional adhesive allowed the formation of a thicker hybrid layer than the self-etching adhesive, with similar penetration into the dentinal tubules (resin tags).