850 results for computer-based
Abstract:
This paper presents a new non-parametric atlas registration framework, derived from the optical flow model and active contour theory, applied to automatic subthalamic nucleus (STN) targeting in deep brain stimulation (DBS) surgery. In a previous work, we demonstrated that the STN position can be predicted from the position of surrounding visible structures, namely the lateral and third ventricles. An STN targeting process can thus be obtained by registering these structures of interest between a brain atlas and the patient image. Here we aim to improve on the results of state-of-the-art targeting methods while reducing the computational time. Our simultaneous segmentation and registration model shows mean STN localization errors statistically similar to those of the best-performing registration algorithms tested so far and to the targeting expert's variability. Moreover, the computational time of our registration method is much lower, which is a worthwhile improvement from a clinical point of view.
Abstract:
This paper presents automated segmentation of structures in the Head and Neck (H&N) region, using an active contour-based joint registration and segmentation model. A new atlas selection strategy is also used. Segmentation is performed based on the dense deformation field computed from the registration of selected structures in the atlas image that have distinct boundaries, onto the patient's image. This approach results in robust segmentation of the structures of interest, even in the presence of tumors or anatomical differences between the atlas and the patient image. For each patient, an atlas image is selected from the available atlas database, based on the similarity metric value, computed after performing an affine registration between each image in the atlas database and the patient's image. Unlike many of the previous approaches in the literature, the similarity metric is not computed over the entire image region; rather, it is computed only in the regions of soft tissue structures to be segmented. Qualitative and quantitative evaluation of the results is presented.
Abstract:
The Iowa Department of Transportation is committed to improved management systems, which in turn has led to increased automation to record and manage construction data. A possible improvement to the current data management system can be found with pen-based computers. Pen-based computers coupled with user-friendly software have now reached the point where an individual's handwriting can be captured and converted to typed text for data collection. Pen-based computers thus appear sufficiently advanced to be used by construction inspectors to record daily project data. The objective of this research was to determine: (1) whether pen-based computers are durable enough to allow maintenance-free operation for field work during Iowa's construction season; and (2) whether pen-based computers can be used effectively by inspectors with little computer experience. The pen-based computer's handwriting recognition was not fast or accurate enough to be successfully utilized. The IBM ThinkPad with the pen pointing device did, however, prove useful for working in Windows' graphical environment. The pen was used for pointing, selecting, and scrolling in Windows applications because of its intuitive nature.
Abstract:
We present an agent-based model with the aim of studying how macro-level dynamics of spatial distances among interacting individuals in a closed space emerge from micro-level dyadic and local interactions. Our agents moved on a lattice (referred to as a room) using a model implemented in a computer program called P-Space in order to minimize their dissatisfaction, defined as a function of the discrepancy between the real distance and the ideal, or desired, distance between agents. Ideal distances evolved in accordance with the agent's personal and social space, which changed throughout the dynamics of the interactions among the agents. In the first set of simulations we studied the effects of the parameters of the function that generated ideal distances, and in a second set we explored how group macro-level behavior depended on model parameters and other variables. We learned that certain parameter values yielded consistent patterns in the agents' personal and social spaces, which in turn led to avoidance and approaching behaviors in the agents. We also found that the spatial behavior of the group of agents as a whole was influenced by the values of the model parameters, as well as by other variables such as the number of agents. Our work demonstrates that the bottom-up approach is a useful way of explaining macro-level spatial behavior. The proposed model is also shown to be a powerful tool for simulating the spatial behavior of groups of interacting individuals.
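The dissatisfaction-minimizing move rule described in this abstract can be sketched in a few lines. This is a minimal illustration, not the P-Space implementation: the lattice size, the Moore neighborhood, the Manhattan distance metric, the single shared ideal distance, and the quadratic dissatisfaction function are all assumptions made for the example.

```python
def dissatisfaction(pos, others, ideal):
    """Sum of squared discrepancies between real and ideal inter-agent distances.

    Manhattan distance on the lattice is an assumption of this sketch."""
    return sum((abs(pos[0] - o[0]) + abs(pos[1] - o[1]) - ideal) ** 2
               for o in others)

def best_move(pos, others, ideal, size=20):
    """Move one lattice step (or stay put) to minimize dissatisfaction."""
    candidates = [(pos[0] + dx, pos[1] + dy)
                  for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                  if 0 <= pos[0] + dx < size and 0 <= pos[1] + dy < size]
    return min(candidates, key=lambda p: dissatisfaction(p, others, ideal))
```

With one neighbor too far away, an agent steps toward it; with the neighbor too close, the same rule produces avoidance, which is the micro-level mechanism the abstract describes.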
Abstract:
In this paper, we present a critical analysis of the state of the art in the definition and typologies of paraphrasing. This analysis shows that no existing characterization of paraphrasing is at once comprehensive, linguistically based, and computationally tractable. We then set out to define and delimit the concept on the basis of propositional content, and present a general, inclusive, and computationally oriented typology of the linguistic mechanisms that give rise to form variations between paraphrase pairs.
Abstract:
Although various foot models have been proposed for kinematics assessment using skin markers, no objective justification exists for the foot segmentations. This study proposed objective kinematic criteria to define which foot joints are relevant (dominant) in skin-marker assessments. Among the studied joints, the shank-hindfoot, hindfoot-midfoot, and medial-lateral forefoot joints were found to have larger mobility than the flexibility of their neighbouring bone sets. The amplitude and pattern consistency of these joint angles confirmed their dominance. Nevertheless, the consistency of the medial-lateral forefoot joint amplitude was lower. These three joints also showed acceptable sensitivity to experimental errors, which supported their dominance. This study concluded that, to be reliable for assessments using skin markers, the foot and ankle complex could be divided into shank, hindfoot, medial forefoot, lateral forefoot, and toes. Kinematics of foot models with more segments must be used more cautiously.
Abstract:
Normal and abnormal brains can be segmented by registering the target image with an atlas. Here, an atlas is defined as the combination of an intensity image (template) and its segmented image (the atlas labels). After registering the atlas template and the target image, the atlas labels are propagated to the target image. We define this process as atlas-based segmentation. In recent years, researchers have investigated registration algorithms to match atlases to query subjects and also strategies for atlas construction. In this paper we present a review of the automated approaches for atlas-based segmentation of magnetic resonance brain images. We aim to point out the strengths and weaknesses of atlas-based methods and suggest new research directions. We use two different criteria to present the methods. First, we refer to the algorithms according to their atlas-based strategy: label propagation, multi-atlas methods, and probabilistic techniques. Subsequently, we classify the methods according to their medical target: the brain and its internal structures, tissue segmentation in healthy subjects, tissue segmentation in fetuses, neonates, and elderly subjects, and segmentation of damaged brains. A quantitative comparison of the results reported in the literature is also presented.
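The label-propagation step that this review takes as its starting point can be illustrated with a toy 2-D example. The deformation-field representation (one atlas coordinate per target voxel) and the nearest-neighbor lookup are assumptions for this sketch; computing the field itself requires a registration algorithm, which is not shown.

```python
import numpy as np

def propagate_labels(atlas_labels, deformation):
    """For each target voxel, look up the atlas label at its mapped coordinate.

    `deformation[y, x]` is assumed to hold the (row, col) atlas coordinate
    that registration mapped to target voxel (y, x)."""
    h, w = atlas_labels.shape
    out = np.zeros_like(atlas_labels)
    for y in range(h):
        for x in range(w):
            ay, ax = deformation[y, x]
            # Nearest-neighbor rounding with clamping to the atlas bounds,
            # so labels stay categorical (no interpolation between labels).
            ay = int(round(min(max(ay, 0), h - 1)))
            ax = int(round(min(max(ax, 0), w - 1)))
            out[y, x] = atlas_labels[ay, ax]
    return out
```

Nearest-neighbor lookup is the usual choice for labels, since interpolating between label values would fabricate non-existent classes.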
Abstract:
Because data on rare species usually are sparse, it is important to have efficient ways to sample additional data. Traditional sampling approaches are of limited value for rare species because a very large proportion of randomly chosen sampling sites are unlikely to shelter the species. For these species, spatial predictions from niche-based distribution models can be used to stratify the sampling and increase sampling efficiency. New data sampled are then used to improve the initial model. Applying this approach repeatedly is an adaptive process that may increase the number of new occurrences found. We illustrate the approach with a case study of a rare and endangered plant species in Switzerland and a simulation experiment. Our field survey confirmed that the method helps in the discovery of new populations of the target species in remote areas where the predicted habitat suitability is high. In our simulations the model-based approach provided a significant improvement (by a factor of 1.8 to 4 times, depending on the measure) over simple random sampling. In terms of cost, this approach may save up to 70% of the time spent in the field.
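The stratification idea in this abstract can be sketched as a weighted draw over candidate sites: instead of choosing survey sites uniformly at random, the draw is weighted by predicted habitat suitability so effort concentrates where the species is likely to occur. The site list, suitability scores, and the sequential sampling-without-replacement scheme are illustrative assumptions; in the actual study, suitability comes from a niche-based distribution model.

```python
import random

def stratified_sample(sites, suitability, k, seed=0):
    """Draw k distinct sites with probability proportional to suitability."""
    rng = random.Random(seed)
    chosen = []
    pool = list(sites)
    weights = list(suitability)
    for _ in range(k):
        # Weighted draw, then remove the site so it is not picked twice.
        i = rng.choices(range(len(pool)), weights=weights)[0]
        chosen.append(pool.pop(i))
        weights.pop(i)
    return chosen
```

In the adaptive loop the abstract describes, the new occurrences found at the chosen sites would be fed back into the distribution model, and the updated suitability scores would drive the next round of sampling.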
Abstract:
The main goal of CleanEx is to provide access to public gene expression data via unique gene names. A second objective is to represent heterogeneous expression data produced by different technologies in a way that facilitates joint analysis and cross-data set comparisons. A consistent and up-to-date gene nomenclature is achieved by associating each single experiment with a permanent target identifier consisting of a physical description of the targeted RNA population or the hybridization reagent used. These targets are then mapped at regular intervals to the growing and evolving catalogues of human genes and genes from model organisms. The completely automatic mapping procedure relies partly on external genome information resources such as UniGene and RefSeq. The central part of CleanEx is a weekly built gene index containing cross-references to all public expression data already incorporated into the system. In addition, the expression target database of CleanEx provides gene mapping and quality control information for various types of experimental resource, such as cDNA clones or Affymetrix probe sets. The web-based query interfaces offer access to individual entries via text string searches or quantitative expression criteria. CleanEx is accessible at: http://www.cleanex.isb-sib.ch/.
Abstract:
quantiNemo is an individual-based, genetically explicit stochastic simulation program. It was developed to investigate the effects of selection, mutation, recombination and drift on quantitative traits with varying architectures in structured populations connected by migration and located in a heterogeneous habitat. quantiNemo is highly flexible at various levels: population, selection, trait(s) architecture, genetic map for QTL and/or markers, environment, demography, mating system, etc. quantiNemo is coded in C++ using an object-oriented approach and runs on any computer platform. Availability: Executables for several platforms, user's manual, and source code are freely available under the GNU General Public License at http://www2.unil.ch/popgen/softwares/quantinemo.
Abstract:
Positron emission tomography (PET) is a functional, noninvasive method for imaging regional metabolic processes that is nowadays most often combined with morphological imaging by computed tomography (CT). Its use is based on the well-founded assumption that metabolic changes occur earlier in tumors than morphologic changes, adding another dimension to imaging. This article reviews the established and investigational indications and radiopharmaceuticals for PET/CT imaging of prostate cancer, bladder cancer, and testicular cancer, before presenting upcoming applications in radiation therapy.
Abstract:
Impressive developments in X-ray imaging are associated with X-ray phase-contrast computed tomography based on grating interferometry, a technique that provides increased contrast compared with conventional absorption-based imaging. A new "single-step" method capable of separating phase information from other contributions has recently been proposed. This approach not only simplifies data-acquisition procedures but, compared with the existing phase-stepping approach, significantly reduces the dose delivered to a sample. However, the image reconstruction procedure is more demanding than for traditional methods, and new algorithms have to be developed to take advantage of the "single-step" method. In the work discussed in this paper, a fast iterative image reconstruction method named OSEM (ordered-subsets expectation maximization) was applied to experimental data to evaluate its performance and range of applicability. The OSEM algorithm with different subsets was also characterized by comparing reconstruction image quality and convergence speed. Computer simulations and experimental results confirm the reliability of this new algorithm for phase-contrast computed tomography applications. Compared with the traditional filtered back-projection algorithm, particularly in the presence of noisy acquisitions, it furnishes better images at higher spatial resolution and with lower noise. We emphasize that the method is highly compatible with future clinical applications of X-ray phase-contrast imaging.
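The OSEM update named in this abstract can be illustrated on a toy linear system: the projection data are split into ordered subsets, and a multiplicative EM update is cycled over the subsets, which is what gives the method its speed over plain EM. The system matrix, subset split, and iteration count below are assumptions for demonstration only; a real grating-interferometry forward model is far more involved.

```python
import numpy as np

def osem(A, y, n_subsets=2, n_iter=10):
    """Ordered-subsets EM for a nonnegative linear model y ~ A @ x."""
    x = np.ones(A.shape[1])                      # flat nonnegative start
    subsets = np.array_split(np.arange(A.shape[0]), n_subsets)
    for _ in range(n_iter):
        for s in subsets:                        # one EM update per subset
            As, ys = A[s], y[s]
            ratio = ys / np.maximum(As @ x, 1e-12)   # measured / estimated
            num = As.T @ ratio
            sens = As.T @ np.ones(len(s))        # subset sensitivity image
            # Update only voxels this subset actually sees.
            x = np.where(sens > 0, x * num / np.maximum(sens, 1e-12), x)
    return x
```

Because each of the `n_subsets` sub-iterations already moves the estimate, one pass over all subsets advances the reconstruction roughly as much as `n_subsets` full EM iterations, which is the convergence-speed trade-off the abstract characterizes.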
Abstract:
Motivation: The comparative analysis of gene gain and loss rates is critical for understanding the role of natural selection and adaptation in shaping gene family sizes. Studying complete genome data from closely related species allows accurate estimation of gene family turnover rates. Current methods and software tools, however, are not well designed for dealing with certain kinds of functional elements, such as microRNAs or transcription factor binding sites. Results: Here, we describe BadiRate, a new software tool to estimate family turnover rates, as well as the number of elements at internal phylogenetic nodes, by likelihood-based methods and parsimony. It implements two stochastic population models, which provide the appropriate statistical framework for testing hypotheses, such as lineage-specific gene family expansions or contractions. We have assessed the accuracy of BadiRate by computer simulations, and have also illustrated its functionality by analyzing a representative empirical dataset.
Abstract:
This report summarizes progress made in Phase 1 of the GIS-based Accident Location and Analysis System (GIS-ALAS) project. The GIS-ALAS project builds on several longstanding efforts by the Iowa Department of Transportation (DOT), law enforcement agencies, Iowa State University, and several other entities to create a locationally-referenced highway accident database for Iowa. Most notable of these efforts is the Iowa DOT’s development of a PC-based accident location and analysis system (PC-ALAS), a system that has been well received by users since it was introduced in 1989. With its pull-down menu structure, PC-ALAS is more portable and user-friendly than its mainframe predecessor. Users can obtain accident statistics for locations during specified time periods. Searches may be refined to identify accidents of specific types or involving drivers with certain characteristics. Output can be viewed on a computer screen, sent to a file, or printed using pre-defined formats.