230 results for Image simulations



Abstract:

Gel electrophoresis can be used to separate nicked circular DNA molecules of equal length that form different knot types. At low electric fields, complex knots drift faster than simpler knots. At high electric fields, however, the order reverses and simpler knots migrate faster than more complex ones. Using Monte Carlo simulations, we investigate the reasons for this reversal of the relative order of electrophoretic mobility of DNA molecules forming different knot types. We observe that at high electric fields the simulated knotted molecules tend to hang over the gel fibres and must cross a substantial energy barrier to slip over the impeding fibre. At low electric fields, the interactions of the drifting molecules with the gel fibres are weak, and no significant energy barriers oppose the detachment of knotted molecules from transverse gel fibres.
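
To make the simulation technique concrete, the sketch below shows the skeleton of a Metropolis Monte Carlo move for a closed bead-spring chain drifting through a gel under a uniform field, with a single impeding fibre modelled as a hard cylinder. This is a minimal illustrative toy, not the authors' knotted-DNA model; the chain geometry, energy terms and all parameter values are assumed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumed, in units of kT and bond length)
N = 64              # beads in the closed (circular) chain
K_BOND = 50.0       # harmonic bond stiffness
B0 = 1.0            # preferred bond length
QE = 0.5            # electric force per bead, along +x
FIBRE_X, FIBRE_R = 5.0, 0.5   # gel fibre: the line {x = 5, y = 0} running along z
STEP = 0.2          # maximum trial displacement per coordinate

# Start as a planar ring centred away from the fibre
theta = 2.0 * np.pi * np.arange(N) / N
R = N * B0 / (2.0 * np.pi)
pos = np.column_stack([R * np.cos(theta), R * np.sin(theta) + 5.0, np.zeros(N)])

def bead_energy(i, r):
    """Energy terms involving bead i placed at position r (kT units)."""
    e = 0.0
    for j in ((i - 1) % N, (i + 1) % N):              # bonds to the two neighbours
        e += 0.5 * K_BOND * (np.linalg.norm(r - pos[j]) - B0) ** 2
    e -= QE * r[0]                                     # uniform field pulls along +x
    if np.hypot(r[0] - FIBRE_X, r[1]) < FIBRE_R:       # hard-core gel fibre
        return np.inf
    return e

def sweep():
    """One Metropolis sweep: N single-bead trial moves."""
    for _ in range(N):
        i = rng.integers(N)
        trial = pos[i] + rng.uniform(-STEP, STEP, size=3)
        dE = bead_energy(i, trial) - bead_energy(i, pos[i])
        if dE <= 0.0 or rng.random() < np.exp(-dE):
            pos[i] = trial

for s in range(2001):
    sweep()
    if s % 500 == 0:
        print(f"sweep {s:5d}  centre-of-mass x = {pos[:, 0].mean():7.2f}")
```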


Abstract:

Attempts to use a stimulated echo acquisition mode (STEAM) in cardiac imaging are impeded by imaging artifacts that attenuate and null the signal from cardiac tissue. In this work, we present a method to reduce this artifact by acquiring two sets of stimulated echo images with two different demodulations. The two resulting images are combined to recover the signal loss and weighted to compensate for possible deformation-dependent intensity variation. Numerical simulations were used to validate the theory. The proposed correction method was also applied to in vivo imaging of normal volunteers (n = 6) and animal models with induced infarction (n = 3). The results show the ability of the method to recover the lost myocardial signal and generate artifact-free black-blood cardiac images.
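
As a rough illustration of the kind of image combination described (not the published reconstruction, whose demodulation and weighting details are given in the paper), the snippet below combines two complex stimulated-echo images so that regions nulled in one acquisition are recovered from the other. The synthetic data and the simple equal-weight root-sum-of-squares combination are assumptions.

```python
import numpy as np

def combine_steam(img_a: np.ndarray, img_b: np.ndarray) -> np.ndarray:
    """Combine two complex STEAM images acquired with different demodulations.
    Where one acquisition is nulled (e.g. by motion or strain), the other
    usually retains signal, so a root-sum-of-squares magnitude combination
    removes the dark bands.  A deformation-dependent weighting, as in the
    paper, would replace the equal weights used here."""
    return np.sqrt(np.abs(img_a) ** 2 + np.abs(img_b) ** 2)

# Tiny synthetic example: two images whose signal nulls fall in different places.
x = np.linspace(0.0, np.pi, 128)
img_a = np.outer(np.sin(x), np.ones(128)) * np.exp(1j * 0.3)   # nulls in the first/last rows
img_b = np.outer(np.cos(x), np.ones(128)) * np.exp(-1j * 0.3)  # null in the middle rows
combined = combine_steam(img_a, img_b)
print("min magnitude, image A :", np.abs(img_a).min())   # ~0: signal void present
print("min magnitude, combined:", combined.min())          # ~1: void filled in
```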


Résumé: Following recent technological advances, digital image archives have grown in quality and quantity to an unprecedented degree. Despite the enormous possibilities they offer, these advances raise new questions about how to process the masses of data acquired. That question is at the core of this Thesis: problems of processing digital information of very high spatial and/or spectral resolution are addressed using statistical learning approaches, namely kernel methods. The Thesis studies image classification problems, that is, the categorization of pixels into a reduced number of classes reflecting the spectral and contextual properties of the objects they represent. The emphasis is placed on the efficiency of the algorithms and on their simplicity, so as to increase their potential for implementation by users. A further challenge of the Thesis is to remain close to the concrete problems of satellite image users without losing sight of the interest of the proposed methods for the machine learning community from which they originate. In this sense, the work is deliberately transdisciplinary, maintaining a strong link between the two fields in all the developments proposed. Four models are proposed. The first addresses the problem of high dimensionality and data redundancy with a model that optimizes classification performance by adapting to the particularities of the image; this is made possible by a ranking of the variables (the bands) that is optimized jointly with the base model, so that only the variables relevant to the problem are used by the classifier. The scarcity of labeled information, and the uncertainty about its relevance to the problem, motivate the next two models, based respectively on active learning and on semi-supervised methods: the former improves the quality of a training set through direct interaction between the user and the machine, while the latter uses unlabeled pixels to improve the description of the available data and the robustness of the model. Finally, the last model addresses the more theoretical question of structure among the outputs: integrating this source of information, never before considered in remote sensing, opens new research challenges.

Advanced kernel methods for remote sensing image classification. Devis Tuia, Institut de Géomatique et d'Analyse du Risque, September 2009.

Abstract: The technical developments of recent years have brought the quantity and quality of digital information to an unprecedented level, as enormous archives of satellite images are available to users. However, even if these advances open more and more possibilities in the use of digital imagery, they also raise several problems of storage and processing. The latter is considered in this Thesis: the processing of very high spatial and spectral resolution images is treated with approaches based on data-driven algorithms relying on kernel methods. In particular, the problem of image classification, i.e. the categorization of an image's pixels into a reduced number of classes reflecting spectral and contextual properties, is studied through the different models presented. The emphasis is placed on algorithmic efficiency and on the simplicity of the proposed approaches, to avoid overly complex models that users would not adopt. The major challenge of the Thesis is to remain close to concrete remote sensing problems without losing the methodological interest from the machine learning viewpoint: in this sense, this work aims to build a bridge between the machine learning and remote sensing communities, and all the models proposed have been developed keeping in mind the need for such a synergy. Four models are proposed. First, an adaptive model that learns the relevant image features addresses the problem of high dimensionality and collinearity of the image features; it automatically provides an accurate classifier and a ranking of the relevance of the individual features. The scarcity and unreliability of labeled information are the common root of the second and third models: when confronted with such problems, the user can either construct the labeled set iteratively by direct interaction with the machine or use the unlabeled data to increase the robustness and quality of the data description. Both solutions have been explored, resulting in two methodological contributions based respectively on active learning and semi-supervised learning. Finally, the more theoretical issue of structured outputs is considered in the last model, which, by integrating output similarity into the model, opens new challenges and opportunities for remote sensing image processing.
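
To give a flavour of one of the building blocks described above, the sketch below implements margin-sampling active learning with an RBF SVM on synthetic two-band "pixels" using scikit-learn. It is a minimal illustration of the general idea, not the thesis's models, features or data; the dataset, kernel choice and query budget are assumptions.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical stand-in for image pixels: two spectral features, two classes.
X = rng.normal(size=(2000, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0.0).astype(int)

# Tiny initial training set with both classes represented.
labeled = list(np.where(y == 0)[0][:5]) + list(np.where(y == 1)[0][:5])
unlabeled = [i for i in range(len(X)) if i not in set(labeled)]

for _ in range(20):                      # 20 user-interaction rounds
    clf = SVC(kernel="rbf", gamma="scale", C=1.0).fit(X[labeled], y[labeled])
    # Margin sampling: query the pixel the current model is least sure about,
    # i.e. the one closest to the decision boundary.
    margins = np.abs(clf.decision_function(X[unlabeled]))
    query = unlabeled[int(np.argmin(margins))]
    labeled.append(query)                # in a real setting the user labels this pixel
    unlabeled.remove(query)

print("training-set size after active learning:", len(labeled))
print("accuracy on all pixels:",
      SVC(kernel="rbf", gamma="scale").fit(X[labeled], y[labeled]).score(X, y))
```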


Abstract:

In cells, DNA is routinely subjected to significant levels of bending and twisting. In some cases, such as under physiological levels of supercoiling, DNA can be so highly strained that it transitions into non-canonical structural conformations capable of relieving mechanical stress within the template. DNA minicircles offer a robust model system for studying stress-induced DNA structures. Using DNA minicircles on the order of 100 bp in size, we have been able to control the bending and torsional stresses within a looped DNA construct. Through a combination of cryo-EM image reconstructions, Bal31 sensitivity assays and Brownian dynamics simulations, we analyzed the effects of biologically relevant underwinding-induced kinks on the overall shape of DNA minicircles. Our results indicate that strongly underwound DNA minicircles, which mimic the physical behavior of small regulatory DNA loops, minimize their free energy by undergoing sequential, cooperative kinking at two sites located about 180° apart along the periphery of the minicircle. This novel form of structural cooperativity in DNA demonstrates that bending strain can localize hyperflexible kinks within the DNA template, which in turn reduces the energetic cost of tightly looping DNA.
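
A back-of-the-envelope calculation helps to see why localized kinks can relax a tightly bent minicircle. The sketch below compares the worm-like-chain bending energy of a smoothly bent ~100 bp circle with that of a circle in which two kinks absorb part of the 360° of total bending at a fixed per-kink cost. The persistence length, kink angle and kink cost are assumed round numbers, not values from the paper, and the model is far simpler than the Brownian dynamics simulations used in the study.

```python
import numpy as np

P_NM = 50.0            # assumed DNA bending persistence length, nm
L_NM = 100 * 0.34      # contour length of a ~100 bp minicircle, nm

def loop_energy_kT(n_kinks: int, kink_angle_deg: float, kink_cost_kT: float) -> float:
    """Elastic energy (in kT) of a closed loop whose 360 degrees of total bending
    is shared between n_kinks sharp kinks and smooth worm-like-chain bending
    spread uniformly over the contour: E = P*Theta^2/(2L) + kink costs."""
    residual = np.deg2rad(360.0 - n_kinks * kink_angle_deg)   # bending left to the smooth part
    return P_NM * residual ** 2 / (2.0 * L_NM) + n_kinks * kink_cost_kT

print(f"smooth circle, no kinks       : {loop_energy_kT(0, 0.0, 0.0):5.1f} kT")
print(f"two 90-degree kinks (5 kT each): {loop_energy_kT(2, 90.0, 5.0):5.1f} kT")
```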


Abstract:

Three-dimensional imaging and quantification of myocardial function are essential steps in the evaluation of cardiac disease. We propose a tagged magnetic resonance imaging methodology called zHARP that encodes and automatically tracks myocardial displacement in three dimensions. Unlike other motion encoding techniques, zHARP encodes both in-plane and through-plane motion in a single image plane without affecting the acquisition speed. Postprocessing unravels this encoding in order to directly track the 3-D displacement of every point within the image plane throughout an entire image sequence. Experimental results include a phantom validation experiment, which compares zHARP to phase contrast imaging, and an in vivo study of a normal human volunteer. Results demonstrate that the simultaneous extraction of in-plane and through-plane displacements from tagged images is feasible.
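
The phase bookkeeping behind such a scheme can be sketched as follows, under the assumption (mine, not stated in the abstract) that the two acquisitions carry through-plane phase encodings of opposite sign while sharing the same in-plane harmonic tag phase. The function below separates the two contributions; phase wrapping and the full HARP tracking pipeline are deliberately ignored, so this is a schematic of the arithmetic rather than the published reconstruction.

```python
import numpy as np

def separate_phases(phi_a: np.ndarray, phi_b: np.ndarray, kz: float):
    """Toy separation of in-plane and through-plane motion information from two
    phase images assumed to obey
        phi_a = phi_tag + kz * w      and      phi_b = phi_tag - kz * w,
    where phi_tag is the in-plane harmonic (tag) phase and w is the
    through-plane displacement.  Phase wrapping is ignored in this sketch."""
    w = (phi_a - phi_b) / (2.0 * kz)      # through-plane displacement, units of 1/kz
    phi_tag = (phi_a + phi_b) / 2.0       # in-plane harmonic phase for 2-D tracking
    return phi_tag, w

# Synthetic check: invent a tag phase and a displacement map, then recover them.
phi_tag_true = np.linspace(-1.0, 1.0, 64).reshape(8, 8)
w_true = 0.05 * np.arange(64, dtype=float).reshape(8, 8)   # mm, say
kz = 2.0                                                    # rad/mm, assumed encoding strength
phi_tag, w = separate_phases(phi_tag_true + kz * w_true, phi_tag_true - kz * w_true, kz)
print("max displacement error:", np.abs(w - w_true).max())
```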


Abstract:

In Neo-Darwinism, variation and natural selection are the two evolutionary mechanisms that propel biological evolution. Our previous reports presented a histogram model to simulate the evolution of populations of individuals classified into bins according to an unspecified, quantifiable phenotypic character; the number of individuals in each bin changed generation after generation under the influence of fitness, while the total population was kept constant. The histogram model also allowed the Shannon entropy (SE) to be monitored continuously as the information content of the total population decreased or increased. Here, a simple Perl (Practical Extraction and Reporting Language) application was developed to carry out these computations, with the critical feature of an added random factor in the percentage of individuals whose offspring move to a vicinal bin. The results of the simulations demonstrate that the random factor mimicking variation considerably increased the range of values covered by the Shannon entropy, especially when the percentage of changed offspring was high. This increase in information content is interpreted as facilitated adaptability of the population.
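
The abstract describes a Perl program; the following is a hypothetical Python re-sketch of the same kind of bookkeeping: a binned population of constant size, fitness-weighted reproduction, a random fraction of offspring stepping to a vicinal bin, and the Shannon entropy of the bin distribution recorded each generation. All parameter values and the fitness landscape are assumptions, not the published ones.

```python
import numpy as np

rng = np.random.default_rng(42)

BINS = 21            # phenotype classes
POP = 10_000         # total population, held constant each generation
GENERATIONS = 200
VARIATION = 0.10     # maximum fraction of a bin's offspring that moves to a vicinal bin

# Assumed fitness landscape: a single optimum at the central bin.
fitness = np.exp(-0.5 * ((np.arange(BINS) - BINS // 2) / 4.0) ** 2)

counts = np.full(BINS, POP / BINS)       # start from a uniform population

def shannon_entropy(c: np.ndarray) -> float:
    p = c / c.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

for gen in range(GENERATIONS):
    # Selection: reproduction proportional to fitness, renormalised so the
    # total population stays constant.
    counts = counts * fitness
    counts = counts / counts.sum() * POP

    # Variation: a random fraction of each bin's offspring moves to a vicinal bin,
    # half stepping down and half stepping up (reflected at the edges).
    moved = counts * VARIATION * rng.random(BINS)
    counts = counts - moved
    down, up = moved / 2.0, moved / 2.0
    counts[:-1] += down[1:]
    counts[1:] += up[:-1]
    counts[0] += down[0]
    counts[-1] += up[-1]

    if gen % 50 == 0:
        print(f"generation {gen:3d}  Shannon entropy = {shannon_entropy(counts):.3f} bits")
```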