920 results for "decoupled image-based visual servoing"


Relevance: 100.00%

Abstract:

Given the dynamic nature of cardiac function, correct temporal alignment of pre-operative models and intraoperative images is crucial for augmented reality in cardiac image-guided interventions. As such, the current study focuses on the development of an image-based strategy for temporal alignment of multimodal cardiac imaging sequences, such as cine Magnetic Resonance Imaging (MRI) or 3D Ultrasound (US). First, we derive a robust, modality-independent signal from the image sequences, estimated by computing the normalized cross-correlation between each frame in the temporal sequence and the end-diastolic frame. This signal acts as a surrogate for the left-ventricle (LV) volume curve over time, whose variation indicates different temporal landmarks of the cardiac cycle. We then perform the temporal alignment of these surrogate signals derived from MRI and US sequences of the same patient through Dynamic Time Warping (DTW), allowing both sequences to be synchronized. The proposed framework was evaluated in 98 patients who had undergone both 3D+t MRI and US scans. The end-systolic frame could be accurately estimated as the minimum of the image-derived surrogate signal, presenting a relative error of 1.6 ± 1.9% and 4.0 ± 4.2% for the MRI and US sequences, respectively, thus supporting its association with key temporal instants of the cardiac cycle. The use of DTW reduces the desynchronization of cardiac events in the MRI and US sequences, making it possible to temporally align multimodal cardiac imaging sequences. Overall, a generic, fast and accurate method for temporal synchronization of MRI and US sequences of the same patient was introduced. This approach could be straightforwardly used for the correct temporal alignment of pre-operative MRI information and intra-operative US images.
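
A minimal sketch of the surrogate-signal and DTW steps described above, assuming hypothetical NumPy arrays `mri_frames` and `us_frames` of shape (T, H, W) whose first frame is end-diastolic; the DTW is a plain dynamic-programming implementation, not necessarily the one used by the authors.

```python
import numpy as np

def surrogate_signal(frames):
    """Normalized cross-correlation of each frame with the end-diastolic (first) frame."""
    ref = frames[0].astype(float)
    ref = (ref - ref.mean()) / ref.std()
    sig = []
    for f in frames:
        f = f.astype(float)
        f = (f - f.mean()) / f.std()
        sig.append(np.mean(ref * f))  # NCC value in [-1, 1]
    return np.asarray(sig)

def dtw_path(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping; returns the warping path."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # Backtrack from (n, m) to (1, 1).
    path, i, j = [], n, m
    while i > 1 or j > 1:
        path.append((i - 1, j - 1))
        step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    path.append((0, 0))
    return path[::-1]

# Example usage with hypothetical image stacks of shape (T, H, W):
# mri_sig = surrogate_signal(mri_frames)
# us_sig = surrogate_signal(us_frames)
# alignment = dtw_path(mri_sig, us_sig)  # pairs of (MRI frame, US frame) indices
# es_mri = int(np.argmin(mri_sig))       # end-systolic frame estimate
```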

Relevance: 100.00%

Abstract:

Internship report presented to the Escola Superior de Comunicação Social in partial fulfillment of the requirements for the master's degree in Strategic Management of Public Relations (Gestão Estratégica das Relações Públicas).

Relevance: 100.00%

Abstract:

High-content analysis has revolutionized cancer drug discovery by identifying substances that alter the phenotype of a cell in ways that prevent tumor growth and metastasis. The high-resolution biofluorescence images from such assays allow precise quantitative measures, enabling the effects of small molecules on a host cell to be distinguished from those on a tumor. In this work, we are particularly interested in the application of deep neural networks (DNNs), a cutting-edge machine learning method, to the classification of compounds into chemical mechanisms of action (MOAs). Compound classification has previously been performed using image-based profiling methods, sometimes combined with feature-reduction methods such as principal component analysis or factor analysis. In this article, we map the input features of each cell to a particular MOA class without using any treatment-level profiles or feature-reduction methods. To the best of our knowledge, this is the first application of DNNs in this domain that leverages single-cell information. Furthermore, we use deep transfer learning (DTL) to alleviate the intensive and computationally demanding effort of searching the huge parameter space of a DNN. Results show that, using this approach, we obtain a 30% speedup and a 2% accuracy improvement.
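
The deep transfer learning mentioned above can be sketched as follows. This is only an illustrative setup, assuming a hypothetical dataset of single-cell crops and MOA labels and an ImageNet-pretrained torchvision backbone; it is not the authors' architecture, but it shows the common pattern of freezing pretrained weights and training only a new classification head.

```python
import torch
import torch.nn as nn
from torchvision import models

N_MOA_CLASSES = 12  # hypothetical number of mechanism-of-action classes

# Start from an ImageNet-pretrained backbone and freeze its weights.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in backbone.parameters():
    p.requires_grad = False

# Replace the final fully connected layer with a fresh MOA classification head.
backbone.fc = nn.Linear(backbone.fc.in_features, N_MOA_CLASSES)

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One gradient step on a batch of single-cell crops (N, 3, H, W) and MOA labels (N,)."""
    backbone.train()
    optimizer.zero_grad()
    loss = criterion(backbone(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```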

Relevance: 100.00%

Abstract:

This report gives a comprehensive and up-to-date review of Alzheimer's disease biomarkers. Recent years have seen significant advances in this field. Whilst considerable effort has focused on Aβ- and tau-related markers, a substantial number of other molecules have been identified that may offer new opportunities.

This Report: Identifies 60 candidate Alzheimer's disease (AD) biomarkers and their associated studies. Of these, 49 are single species or single parameters, 7 are combinations or panels, and 4 involve the measurement of two species or parameters or their ratios. These include proteins (n=34), genes (n=11), image-based parameters (n=7), small molecules (n=3), proteins + genes (n=2) and others (n=3). Of these, 30 (50%) relate to species identified in CSF and 19 (32%) were found in the blood. These candidates may be classified on the basis of their diagnostic utility, namely those which i) may allow AD to be detected when the disease has developed (48 of 75† = 64%), ii) may allow early detection of AD (18 of 75† = 24%), and iii) may allow AD to be predicted before the disease has begun to develop (9 of 75† = 12%). † Note: Of these, 11 were linked to two or more of these capabilities (e.g. allowed both early-stage detection as well as diagnosis after the disease has developed).

Biomarkers: The AD biomarkers identified in this report show significant diversity; however, of the 60 described, 18 (30%) are associated with amyloid beta (Aβ) and 9 (15%) relate to tau. The remainder of the biomarkers (just over half) fall into a number of different groups. Of these, some are associated with other hypotheses on the pathogenesis of AD; however, the vast majority are individually unique and not obviously linked with other markers. The analysis and discussion presented in this report include summaries of the studies and clinical trials that have led to the identification of these markers. Where it has been calculated, the diagnostic sensitivity, specificity and capacity of these markers to differentiate patients with suspected AD from healthy controls and from individuals believed to be suffering from other neurodegenerative conditions have been indicated. These findings are discussed in relation to existing hypotheses on the pathogenesis of AD and the current drug development pipeline. Many uncertainties remain in relation to the pathogenesis of AD and to diagnosing and treating the disease, and many of the studies carried out to identify disease markers are at an early stage and will require confirmation through larger and longer investigations. Nevertheless, significant advances in the identification of AD biomarkers have now been made. Moreover, whilst much of the research on AD biomarkers has focused on amyloid- and tau-related species, it is evident that a substantial number of other species may provide important opportunities.

Purpose of Report: To provide a comprehensive review of important and recently discovered candidate biomarkers of AD, in particular those with the potential to reliably detect the disease or with utility in clinical development, drug repurposing, studies of pathogenesis, and monitoring of drug response and the course of the disease. Other key goals were to identify markers that support current pipeline developments, indicate new potential drug targets, or advance understanding of the pathogenesis of this disease.

Drug Repurposing: Studies of the pathogenesis of AD have identified aberrant changes in a number of other disease areas, including inflammation, diabetes, oxidative stress, lipid metabolism and others. These findings have prompted studies to evaluate some existing approved drugs to treat AD. This report identifies studies of 9 established drug classes currently being investigated for potential repurposing.

Alzheimer's Disease: In 2005, the global prevalence of dementia was estimated at 25 million, with more than 4 million new cases occurring each year. It is also calculated that the number of people affected will double every 20 years, to 80 million by 2040, if a cure is not found. More than 50% of dementia cases are due to AD. Today, approximately 5 million individuals in the US suffer from AD, representing one in eight people over the age of 65. Direct and indirect costs of AD and other forms of dementia in the US are around $150 billion annually. Worldwide, costs for dementia care are estimated at $315 billion annually. Despite significant research into this debilitating and ultimately fatal disease, advances in the development of diagnostic tests for AD and, moreover, effective treatments remain elusive.

Background: Alzheimer's disease is the most common cause of dementia, yet its clinical diagnosis remains uncertain until an eventual post-mortem histopathology examination is carried out. Currently, therapy for patients with Alzheimer's disease only treats the symptoms; however, it is anticipated that new disease-modifying drugs will soon become available. The urgency for new and effective treatments for AD is matched by the need for new tests to detect and diagnose the condition. Uncertainties in the diagnosis of AD mean that the disease is often undiagnosed and undertreated. Moreover, it is clear that clinical confirmation of AD, using cognitive tests, can only be made after substantial neuronal cell loss has occurred; a process that may have taken place over many years. Poor response to current therapies may therefore, in part, reflect the fact that such treatments are generally commenced only after neuronal damage has occurred. The absence of tests to detect or diagnose presymptomatic AD also means that there is no standard that can be applied to validate experimental findings (e.g. in drug discovery) without performing lengthy studies and eventual confirmation by autopsy. These limitations are focusing considerable effort on the identification of biomarkers that advance understanding of the pathogenesis of AD and of how the disease can be diagnosed in its early stages and treated. It is hoped that developments in these areas will help physicians to detect AD and guide therapy before the first signs of neuronal damage appear. The last 5-10 years have seen substantial research into the pathogenesis of AD, and this has led to the identification of a substantial number of AD biomarkers, which offer important insights into this disease. This report brings together the latest advances in the identification of AD biomarkers and analyses the opportunities they offer in drug R&D and diagnostics.

Relevance: 100.00%

Abstract:

PURPOSE: Respiratory motion correction remains a challenge in coronary magnetic resonance imaging (MRI), and current techniques, such as navigator gating, suffer from sub-optimal scan efficiency and ease of use. To overcome these limitations, an image-based self-navigation technique is proposed that uses "sub-images" and compressed sensing (CS) to obtain translational motion correction in 2D. The method was preliminarily implemented as a 2D technique and tested for feasibility for targeted coronary imaging. METHODS: During a 2D segmented radial k-space data acquisition, heavily undersampled sub-images were reconstructed from the readouts collected during each cardiac cycle. These sub-images may then be used for respiratory self-navigation. Alternatively, a CS reconstruction may be used to create the sub-images, so as to partially compensate for the heavy undersampling. Both approaches were quantitatively assessed using simulations and in vivo studies, and the resulting self-navigation strategies were then compared to conventional navigator gating. RESULTS: Sub-images reconstructed using CS showed a lower artifact level than sub-images reconstructed without CS. As a result, the final image quality was significantly better when using CS-assisted self-navigation than with the non-CS approach. Both self-navigation techniques led to a 69% scan-time reduction compared with navigator gating; despite this decrease in scan time, there was no significant difference in image quality between the CS-assisted self-navigation technique and conventional navigator gating. CONCLUSIONS: CS-assisted self-navigation using 2D translational motion correction demonstrated the feasibility of producing coronary MRA data with image quality comparable to that obtained with conventional navigator gating, without the use of additional acquisitions or motion modeling, while still allowing for 100% scan efficiency and improved ease of use. In conclusion, compressed sensing may become a critical adjunct for 2D translational motion correction in free-breathing cardiac imaging with high spatial resolution. An expansion to modern 3D approaches is now warranted.
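
As an illustration of 2D translational self-navigation, the sketch below estimates the shift between two sub-images with standard phase correlation; `subimage_ref` and `subimage_t` are hypothetical NumPy arrays, and the CS reconstruction of the sub-images from undersampled radial data is not reproduced here.

```python
import numpy as np

def estimate_translation(ref, img):
    """Phase correlation: returns the integer (dy, dx) shift that aligns `img` to `ref`."""
    F_ref = np.fft.fft2(ref)
    F_img = np.fft.fft2(img)
    cross_power = F_ref * np.conj(F_img)
    cross_power /= np.abs(cross_power) + 1e-12   # keep only the phase information
    corr = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the field of view to negative values.
    shifts = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return tuple(shifts)  # (dy, dx) in pixels

# Example: correct each cardiac cycle's sub-image towards the reference sub-image.
# dy, dx = estimate_translation(subimage_ref, subimage_t)
# corrected = np.roll(np.roll(subimage_t, dy, axis=0), dx, axis=1)
```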

Relevance: 100.00%

Abstract:

Patient-specific simulations of the hemodynamics in intracranial aneurysms can be constructed using image-based vascular models and computational fluid dynamics (CFD) techniques. This work evaluates the impact of the choice of imaging technique on these simulations.

Relevance: 100.00%

Abstract:

BACKGROUND: Transient balanced steady-state free-precession (bSSFP) has shown substantial promise for noninvasive assessment of coronary arteries, but its utilization at 3.0 T and above has been hampered by susceptibility to field inhomogeneities that degrade image quality. The purpose of this work was to refine, implement, and test a robust, practical single-breathhold bSSFP coronary MRA sequence at 3.0 T and to test the reproducibility of the technique. METHODS: A 3D, volume-targeted, high-resolution bSSFP sequence was implemented. Localized image-based shimming was performed to minimize inhomogeneities of both the static magnetic field and the radio frequency excitation field. Fifteen healthy volunteers and three patients with coronary artery disease underwent examination with the bSSFP sequence (scan time = 20.5 ± 2.0 seconds), and acquisitions were repeated in nine subjects. The images were quantitatively analyzed using a semi-automated software tool, and the repeatability and reproducibility of measurements were determined using regression analysis and the intra-class correlation coefficient (ICC), in a blinded manner. RESULTS: The 3D bSSFP sequence provided uniform, high-quality depiction of the coronary arteries (n = 20). The average visible vessel length of 100.5 ± 6.3 mm and sharpness of 55 ± 2% compared favorably with earlier reported values for navigator-gated bSSFP and gradient-echo sequences at 3.0 T. Length measurements demonstrated a highly statistically significant degree of inter-observer (r = 0.994, ICC = 0.993), intra-observer (r = 0.894, ICC = 0.896), and inter-scan concordance (r = 0.980, ICC = 0.974). Furthermore, ICC values demonstrated excellent intra-observer, inter-observer, and inter-scan agreement for vessel diameter measurements (ICC = 0.987, 0.976, and 0.961, respectively) and vessel sharpness values (ICC = 0.989, 0.938, and 0.904, respectively). CONCLUSIONS: The 3D bSSFP acquisition, using a state-of-the-art MR scanner equipped with recently available technologies such as multi-transmit, a 32-channel cardiac coil, and localized B0 and B1+ shimming, allows accelerated and reproducible multi-segment assessment of the major coronary arteries at 3.0 T in a single breathhold. This rapid sequence may be especially useful for functional imaging of the coronaries, where the acquisition time is limited by the stress duration, and in cases where low navigator-gating efficiency prohibits acquisition of a free-breathing scan in a reasonable time period.
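
The intra-class correlation used above to quantify observer and scan agreement can be sketched as follows. This follows the Shrout-Fleiss ICC(2,1) definition (two-way random effects, absolute agreement, single measures); the variable names and example data are illustrative only and are not taken from the study.

```python
import numpy as np

def icc_2_1(x):
    """Shrout-Fleiss ICC(2,1) for an (n_subjects, k_raters) array of measurements."""
    x = np.asarray(x, dtype=float)
    n, k = x.shape
    grand = x.mean()
    ss_rows = k * np.sum((x.mean(axis=1) - grand) ** 2)   # between-subject
    ss_cols = n * np.sum((x.mean(axis=0) - grand) ** 2)   # between-rater
    ss_total = np.sum((x - grand) ** 2)
    ss_err = ss_total - ss_rows - ss_cols
    ms_r = ss_rows / (n - 1)
    ms_c = ss_cols / (k - 1)
    ms_e = ss_err / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

# Illustrative use: 5 vessels measured by 2 observers (made-up lengths in mm).
lengths = [[98.0, 99.5], [102.3, 101.8], [95.4, 96.0], [110.2, 109.1], [100.7, 101.5]]
print(round(icc_2_1(lengths), 3))
```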

Relevance: 100.00%

Abstract:

Seeing seems effortless, despite the need to segregate and integrate visual information that varies in quality, quantity, and location. The extent to which seeing passively recapitulates the external world is challenged by phenomena such as illusory contours, an example of visual completion whereby borders are perceived despite their physical absence in the image. Instead, visual completion and seeing are increasingly conceived as active processes, dependent on information exchange across neural populations. How this is instantiated in the brain remains controversial. Divergent models emanate from single-unit and population-level electrophysiology, neuroimaging, and neurostimulation studies. We reconcile discrepant findings from different methods and disciplines, and underscore the importance of taking into account spatiotemporal brain dynamics in generating models of brain function and perception.

Relevance: 100.00%

Abstract:

This work reports the development and implementation of a practical assignment for the Active and Robot Vision course. In the assignment, a system is designed and implemented that moves objects with a robot arm in three-dimensional space, using digital images to determine the positions of the objects. In the assignment implementation presented in this work, thresholding in the HSV color space was used to segment the objects from the image based on their colors. The binary image produced by the segmentation was filtered with a median filter to remove noise. An object's position in the binary image was determined by labeling connected groups of pixels with a connected-component labeling method, and the position of the largest labeled pixel group was taken as the object's position. The object positions in the image were mapped to three-dimensional coordinates with the help of a calibrated camera, and the system moved the objects based on their estimated three-dimensional positions.
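
A minimal OpenCV sketch of the color-based localization pipeline described above (HSV thresholding, median filtering, connected-component labeling); the HSV bounds and file name are illustrative placeholders, and the calibrated-camera step that lifts the pixel position to 3D is omitted.

```python
import cv2
import numpy as np

def locate_object(bgr_image, hsv_lower, hsv_upper):
    """Return the (x, y) centroid of the largest blob matching the given HSV range."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, hsv_lower, hsv_upper)     # binary segmentation by color
    mask = cv2.medianBlur(mask, 5)                    # remove salt-and-pepper noise
    n_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    if n_labels <= 1:                                 # label 0 is the background
        return None
    largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
    return tuple(centroids[largest])                  # pixel coordinates (x, y)

# Illustrative usage with made-up HSV bounds for a red object:
# image = cv2.imread("workspace.png")
# xy = locate_object(image, np.array([0, 120, 70]), np.array([10, 255, 255]))
```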

Relevance: 100.00%

Abstract:

The aim of this study was to simulate blood flow in the human thoracic aorta and to understand the role of flow dynamics in the initiation and localization of atherosclerotic plaque. Blood flow dynamics were numerically simulated in three idealized and two realistic models of the thoracic aorta. The idealized models were reconstructed from measurements available in the literature, and the realistic models were constructed by processing Computed Tomography (CT) images made available by South Karelia Central Hospital in Lappeenranta. The reconstruction of the thoracic aorta consisted of operations such as contrast adjustment, image segmentation, and 3D surface rendering. Additional design operations were performed to make the aorta models compatible with the numerical-method-based computer codes, and the image processing and design operations were performed with specialized medical image processing software. Pulsatile pressure and velocity profiles were applied as inlet boundary conditions. The blood flow was assumed to be homogeneous and incompressible, and the blood was assumed to be a Newtonian fluid. The simulations with the idealized models were carried out with a Finite Element Method based computer code, while the simulations with the realistic models were carried out with a Finite Volume Method based computer code. Simulations were carried out for four cardiac cycles, and the distributions of flow, pressure, and Wall Shear Stress (WSS) observed during the fourth cardiac cycle were extensively analyzed. The aim of carrying out the simulations with the idealized models was to obtain an estimate of the flow dynamics in a realistic aorta model, and the motive behind choosing three aorta models with distinct features was to understand the dependence of flow dynamics on aorta anatomy. A highly disturbed and nonuniform distribution of velocity and WSS was observed in the aortic arch and near the brachiocephalic, left common carotid, and left subclavian arteries. The WSS profiles at the roots of the branches showed significant differences with variations in the geometry of the aorta and its branches. The comparison of instantaneous WSS profiles revealed that the model with straight branching arteries had relatively lower WSS than the aorta model with curved branches. In addition, significant differences were observed in the spatial and temporal profiles of WSS, flow, and pressure. The study with the idealized models was extended to blood flow in the thoracic aorta under the effects of hypertension and hypotension: one of the idealized aorta models was modified, along with the boundary conditions, to mimic the thoracic aorta under these conditions. The results of the simulations with the realistic models extracted from CT scans demonstrated more realistic flow dynamics than the idealized models. During systole, the velocity in the ascending aorta was skewed towards the outer wall of the aortic arch, and the flow developed secondary flow patterns as it moved downstream towards the aortic arch. Unlike in the idealized models, the distribution of flow was nonplanar and heavily guided by the arterial anatomy. Flow cavitation was observed in the aorta model whose imaging included longer branches; it could not be properly observed in the model whose imaging covered only a shorter length of the aortic branches.
Recirculating flow was also observed along the inner wall of the aortic arch. During diastole, however, the flow profiles were almost flat and regular due to the acceleration of flow at the inlet, and they were weakly turbulent during flow reversal. The complex flow patterns caused a non-uniform distribution of WSS: high WSS was distributed at the junctions of the branches and the aortic arch, low WSS at the proximal parts of the junctions, and intermediate WSS at the distal parts of the junctions. The pulsatile nature of the inflow caused oscillating WSS at the branch entry regions and the inner curvature of the aortic arch. Based on the WSS distribution in the realistic model, one of the aorta models was altered to introduce artificial atherosclerotic plaque at the branch entry regions and the inner curvature of the aortic arch. Atherosclerotic plaque causing 50% blockage of the lumen was introduced in the brachiocephalic artery, the common carotid artery, the left subclavian artery, and the aortic arch. The aim of this part of the study was, first, to study the effect of stenosis on the flow and WSS distributions; second, to understand the effect of the shape of the atherosclerotic plaque; and finally, to investigate the effect of the severity of lumen blockage. The results revealed that the distribution of WSS is significantly affected by a plaque causing a mere 50% stenosis, and that an asymmetric stenosis causes higher WSS in the branching arteries than a symmetric plaque does. The flow dynamics within the thoracic aorta models have thus been extensively studied and reported here, and the effects of pressure and arterial anatomy on the flow dynamics were investigated. The distribution of complex flow and WSS correlates with the localization of atherosclerosis. From the available results we can conclude that the thoracic aorta, with its complex anatomy, is the artery most vulnerable to the localization and development of atherosclerosis, and that flow dynamics and arterial anatomy play a role in this localization. Patient-specific, image-based models can be used to identify locations in the aorta vulnerable to the development of arterial diseases such as atherosclerosis.
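
For reference, the wall quantities discussed above can be written compactly. These are the standard Newtonian-fluid definition of WSS together with the commonly used time-averaged WSS and oscillatory shear index; they are given for orientation and are not necessarily the exact metrics computed in this thesis.

```latex
% Wall shear stress for a Newtonian fluid with dynamic viscosity \mu,
% where u_t is the velocity component tangential to the wall and n the wall-normal direction:
\tau_w = \mu \left. \frac{\partial u_t}{\partial n} \right|_{\text{wall}}

% Time-averaged WSS and oscillatory shear index (OSI) over one cardiac cycle of period T:
\mathrm{TAWSS} = \frac{1}{T} \int_0^T \lvert \vec{\tau}_w \rvert \, dt,
\qquad
\mathrm{OSI} = \frac{1}{2} \left( 1 - \frac{\left\lvert \int_0^T \vec{\tau}_w \, dt \right\rvert}{\int_0^T \lvert \vec{\tau}_w \rvert \, dt} \right)
```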

Relevance: 100.00%

Abstract:

The thesis is related to the topic of image-based characterization of fibers in pulp suspension during the papermaking process. The papermaking industry is focusing on process control optimization and automation, which makes it possible to manufacture high-quality products in a resource-efficient way. As a part of process control, pulp suspension analysis makes it possible to predict and modify the properties of the end product. This work is part of the tree species identification task and focuses on the analysis of fiber parameters in the pulp suspension at the wet stage of paper production. The existing machine vision methods for pulp characterization were investigated, and a method exploiting direction-sensitive filtering, non-maximum suppression, hysteresis thresholding, tensor voting, and curve extraction from tensor maps was developed. Applying the method to microscopic grayscale pulp images made it possible to detect curves corresponding to fibers and to compute their morphological characteristics. The performance of the method was evaluated against manually produced ground-truth data. The accuracy of fiber characteristic estimation for the acacia pulp images was 84%, 85%, and 60% for length, width, and curvature, respectively.
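
A small sketch of how morphological characteristics of the kind reported above could be computed once a fiber centerline has been extracted; the extraction pipeline itself (direction-sensitive filtering, non-maximum suppression, hysteresis thresholding, tensor voting) is not reproduced, and the pixel-to-micrometer scale is a hypothetical parameter.

```python
import numpy as np

def fiber_metrics(centerline_px, um_per_px=1.0):
    """Length and mean curvature of a fiber from its ordered centerline points (N, 2)."""
    pts = np.asarray(centerline_px, dtype=float) * um_per_px
    seg = np.diff(pts, axis=0)                       # segment vectors
    seg_len = np.linalg.norm(seg, axis=1)
    length = seg_len.sum()                           # arc length in micrometers
    # Turning angle between consecutive segments divided by the local arc length
    # gives a discrete estimate of curvature along the fiber.
    ang = np.arctan2(seg[:, 1], seg[:, 0])
    turn = np.abs(np.diff(np.unwrap(ang)))
    local_len = 0.5 * (seg_len[:-1] + seg_len[1:])
    curvature = float(np.mean(turn / local_len)) if len(turn) else 0.0
    return {"length_um": float(length), "mean_curvature_per_um": curvature}

# Illustrative use with a made-up, gently curved centerline (pixel coordinates):
t = np.linspace(0, np.pi / 4, 50)
demo = np.stack([100 * np.cos(t), 100 * np.sin(t)], axis=1)
print(fiber_metrics(demo, um_per_px=2.0))
```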

Relevance: 100.00%

Abstract:

The papermaking industry has been continuously developing intelligent solutions to characterize the raw materials it uses, to control the manufacturing process in a robust way, and to guarantee the desired quality of the end product. Based on much-improved imaging techniques and image-based analysis methods, it has become possible to look inside the manufacturing pipeline and propose more effective alternatives to human expertise. This study is focused on the development of image analysis methods for the pulping process of papermaking. Pulping starts with wood disintegration and the forming of the fiber suspension, which is subsequently bleached, mixed with additives and chemicals, and finally dried and shipped to the papermaking mills. At each stage of the process it is important to analyze the properties of the raw material to guarantee the product quality. In order to evaluate the properties of fibers, the main component of the pulp suspension, a framework for fiber characterization based on microscopic images is proposed in this thesis as the first contribution. The framework allows computation of fiber length and curl index that correlate well with the ground-truth values. The bubble detection method, the second contribution, was developed to estimate the gas volume at the delignification stage of the pulping process based on high-resolution in-line imaging. The gas volume was estimated accurately and the solution enabled just-in-time process termination, whereas accurate estimation of bubble size categories remained challenging. As the third contribution of the study, optical flow computation was studied and the methods were successfully applied to pulp flow velocity estimation based on double-exposed images. Finally, a framework for classifying dirt particles in dried pulp sheets, including semisynthetic ground-truth generation, feature selection, and a performance comparison of state-of-the-art classification techniques, was proposed as the fourth contribution. The framework was successfully tested on semisynthetic and real-world pulp sheet images. These four contributions assist in developing integrated, factory-level, vision-based process control.
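
As an illustration of the bubble-detection idea, the sketch below finds roughly circular bubbles with OpenCV's Hough circle transform and converts them to a gas area fraction; the thesis' own detector and its parameters are not reproduced, and the thresholds and file name here are placeholders.

```python
import cv2
import numpy as np

def gas_area_fraction(gray_image):
    """Detect circular bubbles and return their total area as a fraction of the image area."""
    blurred = cv2.medianBlur(gray_image, 5)
    circles = cv2.HoughCircles(
        blurred, cv2.HOUGH_GRADIENT, dp=1.2, minDist=20,
        param1=100, param2=30, minRadius=5, maxRadius=80,
    )
    if circles is None:
        return 0.0
    radii = circles[0, :, 2]                               # (x, y, r) per detected circle
    bubble_area = float(np.sum(np.pi * radii ** 2))
    return min(1.0, bubble_area / float(gray_image.size))  # clip overlapping detections

# Illustrative usage:
# frame = cv2.imread("inline_frame.png", cv2.IMREAD_GRAYSCALE)
# print(gas_area_fraction(frame))
```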

Relevance: 100.00%

Abstract:

In this thesis, the process of building software for transport accessibility analysis is described. The goal was to create software that is easy to distribute and simple to use for users without a particular background in geographical data analysis. It was shown that existing tools do not suit this particular task due to their complex interfaces or significant rendering times. The goal was accomplished by applying modern approaches to building web applications, such as maps based on vector tiles, the FLUX architecture design pattern, and module bundling. It was found that vector tiles have considerable advantages over image-based tiles, such as faster rendering and real-time styling.

Relevance: 100.00%

Abstract:

This paper presents a lattice-based visual metaphor for knowledge discovery in electronic mail. It allows a user to navigate email using a visual lattice metaphor rather than a tree structure. By using such a conceptual multi-hierarchy, the content and shape of the lattice can be varied to accommodate any number of queries against the email collection. The system provides more flexibility in retrieving stored emails and can be generalised to any electronic documents. The paper presents the underlying mathematical structures, and a number of examples of the lattice and multi-hierarchy working with a prototypical email collection.
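
A toy sketch of the formal-concept machinery behind such a lattice view, assuming a hypothetical email-by-keyword incidence table; it enumerates the formal concepts by brute force, which suffices for illustration but is not how the described system is implemented.

```python
from itertools import combinations

# Hypothetical incidence: which keywords each email mentions.
context = {
    "mail1": {"budget", "project"},
    "mail2": {"budget", "meeting"},
    "mail3": {"project", "meeting"},
    "mail4": {"budget", "project", "meeting"},
}
attributes = sorted(set().union(*context.values()))

def extent(intent):
    """Objects (emails) that share every attribute in `intent`."""
    return {o for o, attrs in context.items() if intent <= attrs}

def closure(intent):
    """Attributes common to all objects in the extent of `intent`."""
    objs = extent(intent)
    if not objs:
        return set(attributes)
    common = set(attributes)
    for o in objs:
        common &= context[o]
    return common

# Brute-force concept enumeration: keep attribute sets that are closed.
concepts = set()
for r in range(len(attributes) + 1):
    for combo in combinations(attributes, r):
        intent = closure(set(combo))
        concepts.add((frozenset(extent(intent)), frozenset(intent)))

for ext, intent in sorted(concepts, key=lambda c: len(c[0]), reverse=True):
    print(sorted(ext), "<->", sorted(intent))
```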

Relevance: 100.00%

Abstract:

We discuss a variety of object recognition experiments in which human subjects were presented with realistically rendered images of computer-generated three-dimensional objects, with tight control over stimulus shape, surface properties, illumination, and viewpoint, as well as subjects' prior exposure to the stimulus objects. In all experiments recognition performance was: (1) consistently viewpoint dependent; (2) only partially aided by binocular stereo and other depth information; (3) specific to viewpoints that were familiar; (4) systematically disrupted by rotation in depth more than by deforming the two-dimensional images of the stimuli. These results are consistent with recently advanced computational theories of recognition based on view interpolation.