68 results for Vision-based row tracking algorithm
Abstract:
PURPOSE: Signal detection in 3D medical images depends on many factors, such as foveal and peripheral vision, the type of signal, background complexity, and the speed at which the frames are displayed. In this paper, the authors focus on the speed with which radiologists and naïve observers search through medical images. Prior to the study, the authors asked the radiologists to estimate the speed at which they scrolled through CT sets; they gave a subjective estimate of 5 frames per second (fps). The aim of this paper is to measure and analyze the speed with which humans scroll through image stacks, presenting a method to visually display observer behavior during the search as well as measuring the accuracy of the decisions. This information will be useful in the development of model observers, mathematical algorithms that can be used to evaluate diagnostic imaging systems. METHODS: The authors performed a series of 3D 4-alternative forced-choice lung nodule detection tasks on volumetric stacks of chest CT images iteratively reconstructed with a lung algorithm. The strategy used by three radiologists and three naïve observers was assessed with an eye-tracker in order to establish where their gaze was fixed during the experiment and to verify that a correct decision was not due to chance alone. In a first set of experiments, the observers read the images at three fixed scrolling speeds and were allowed to see each alternative once. In a second set of experiments, the subjects were allowed to scroll through the image stacks at will, with no time or gaze limits. In both fixed-speed and free-scrolling conditions, the four image stacks were displayed simultaneously. All trials were shown at two different image contrasts. RESULTS: The authors were able to determine a histogram of scrolling speeds in frames per second.
The scrolling speed of the naïve observers and the radiologists at the moment the signal was detected was measured at 25-30 fps. For the task chosen, observer performance was affected neither by image contrast nor by observer experience. However, the naïve observers exhibited a different scrolling pattern than the radiologists, with a tendency toward a higher number of direction changes and more slices viewed. CONCLUSIONS: The authors have determined a distribution of speeds for volumetric detection tasks. The speed at detection was higher than the radiologists' subjective estimate before the experiment. The measured speed information will be useful in the development of 3D model observers, especially anthropomorphic model observers, which try to mimic human behavior.
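The central measurement above, scrolling speed in frames per second, reduces to simple arithmetic on logged slice-change times. A minimal sketch (the timestamps below are hypothetical, not data from the study):

```python
def scrolling_speeds(timestamps_ms):
    """Instantaneous scrolling speed (fps) between consecutive slice changes."""
    return [1000.0 / (t1 - t0)
            for t0, t1 in zip(timestamps_ms, timestamps_ms[1:])]

def speed_histogram(speeds, bin_width=5.0):
    """Histogram of speeds in fixed-width fps bins (bin start -> count)."""
    hist = {}
    for s in speeds:
        b = (s // bin_width) * bin_width
        hist[b] = hist.get(b, 0) + 1
    return hist

# Hypothetical log: one slice change every 40 ms, i.e. a steady 25 fps.
ts = list(range(0, 440, 40))
print(speed_histogram(scrolling_speeds(ts)))
```

With slice changes logged every 40 ms, each interval maps to 25 fps, inside the 25-30 fps range the study reports at the moment of detection.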
Abstract:
BACKGROUND: This study describes the prevalence, associated anomalies, and demographic characteristics of cases of multiple congenital anomalies (MCA) in 19 population-based European registries (EUROCAT) covering 959,446 births from 2004 to 2010. METHODS: EUROCAT implemented a computer algorithm for classification of congenital anomaly cases, followed by manual review of potential MCA cases by geneticists. MCA cases are defined as cases with two or more major anomalies of different organ systems, excluding sequences, chromosomal anomalies, and monogenic syndromes. RESULTS: The combination of an epidemiological and clinical approach to case classification has improved the quality and accuracy of the MCA data. Total prevalence of MCA cases was 15.8 per 10,000 births. Fetal deaths and terminations of pregnancy were significantly more frequent in MCA cases than in isolated cases (p < 0.001), and MCA cases were more frequently prenatally diagnosed (p < 0.001). Live born infants with MCA were more often born preterm (p < 0.01) and with birth weight < 2500 grams (p < 0.01). Respiratory and ear, face, and neck anomalies were the most likely to occur with other anomalies (34% and 32%), and congenital heart defects and limb anomalies were the least likely to occur with other anomalies (13%) (p < 0.01). However, due to their high prevalence, congenital heart defects were present in half of all MCA cases. Among males with MCA, the frequency of genital anomalies was significantly greater than among females with MCA (p < 0.001). CONCLUSION: Although rare, MCA cases are an important public health issue because of their severity. The EUROCAT database of MCA cases will allow future investigations of the epidemiology of these conditions and related clinical and diagnostic problems.
Abstract:
The goal of the present work was to assess the feasibility of using a pseudo-inverse and null-space optimization approach in the modeling of shoulder biomechanics. The method was applied to a simplified musculoskeletal shoulder model. The mechanical system consisted of the arm; the external forces were the arm weight, six scapulo-humeral muscles, and the reaction at the glenohumeral joint, which was modeled as a spherical joint. Muscle wrapping was considered around the humeral head, assumed spherical. The dynamical equations were solved in a Lagrangian approach. The mathematical redundancy of the mechanical system was resolved in two steps: a pseudo-inverse optimization to minimize the square of the muscle stress, and a null-space optimization to restrict the muscle forces to physiological limits. Several movements were simulated. The mathematical and numerical aspects of the constrained redundancy problem were efficiently solved by the proposed method. The predicted muscle moment arms were consistent with cadaveric measurements, and the joint reaction force was consistent with in vivo measurements. This preliminary work demonstrates that the developed algorithm has great potential for more complex musculoskeletal modeling of the shoulder joint. In particular, it could be further applied to a non-spherical joint model, allowing for the natural translation of the humeral head in the glenoid fossa.
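The two-step redundancy resolution described above can be sketched on a toy one-degree-of-freedom joint with three muscles (all numbers hypothetical, not the paper's model): a pseudo-inverse solution minimizing the sum of squared muscle forces (a stand-in for squared muscle stress), followed by a null-space correction that restores physiological force limits without altering the joint moment.

```python
import numpy as np

A = np.array([[0.02, 0.03, 0.05]])    # moment arms (m): one equilibrium equation
b = np.array([1.0])                   # required joint moment (N m)
f_min, f_max = 0.0, 12.0              # assumed physiological force limits (N)

# Step 1: pseudo-inverse solution of A f = b (minimum-norm muscle forces).
f = np.linalg.pinv(A) @ b

# Step 2: null-space optimization. Corrections of the form N z leave the
# joint moment A f unchanged, so we iterate toward the clipped (feasible)
# forces while staying inside the null space of A.
N = np.eye(A.shape[1]) - np.linalg.pinv(A) @ A
for _ in range(50):
    f = f + N @ (np.clip(f, f_min, f_max) - f)

print(f.round(3), (A @ f)[0])   # forces within limits, moment preserved
```

The clipped-target iteration is one simple way to realize the "null-space optimization" step; the paper's actual formulation may differ.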
Abstract:
Intracardiac organization indices such as atrial fibrillation (AF) cycle length (AFCL) have been used to track the efficiency of stepwise catheter ablation (step-CA) of long-standing persistent AF (pers-AF), however with limited success. The timing between nearby bipolar intracardiac electrograms (EGMs) reflects the spatial dynamics of wavelets during AF. The extent of synchronization between EGMs is an indirect measure of AF spatial organization. The synchronization between nearby EGMs during step-CA of pers-AF was evaluated using new indices based on the cross-correlation. The first one (spar(W)) quantifies the sparseness of the cross-correlation of local activation times. The second one (OI(W)) reflects the local concentration around the largest peak of the cross-correlation. By computing their relative evolution during step-CA until AF termination (AF-term), we found that OI(W) appeared superior to AFCL and spar(W) in tracking the effect of step-CA "en route" to AF-term.
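The abstract does not give the exact definitions of spar(W) and OI(W), so the sketch below uses generic stand-ins: Hoyer sparseness and a "fraction of energy near the largest peak" ratio, computed on the cross-correlation of two synthetic electrogram-like signals. It only illustrates why such indices separate synchronized from unsynchronized activity.

```python
import numpy as np

rng = np.random.default_rng(0)

def xcorr(x, y):
    """Full normalized cross-correlation of two equal-length signals."""
    x = (x - x.mean()) / (x.std() + 1e-12)
    y = (y - y.mean()) / (y.std() + 1e-12)
    return np.correlate(x, y, mode="full") / len(x)

def sparseness(w):
    """Hoyer sparseness in [0, 1]: 1 for a single spike, ~0 for flat."""
    w = np.abs(w)
    n = len(w)
    ratio = w.sum() / np.sqrt((w ** 2).sum() + 1e-12)
    return (np.sqrt(n) - ratio) / (np.sqrt(n) - 1)

def concentration(w, half_width=5):
    """Fraction of cross-correlation energy close to its largest peak."""
    e = w ** 2
    k = int(np.argmax(e))
    lo, hi = max(0, k - half_width), min(len(e), k + half_width + 1)
    return e[lo:hi].sum() / e.sum()

x = rng.standard_normal(1000)
w_sync = xcorr(x, np.roll(x, 7))                  # same activity, delayed 7 samples
w_unsync = xcorr(x, rng.standard_normal(1000))    # unrelated activity
print(round(float(concentration(w_sync)), 3),
      round(float(concentration(w_unsync)), 3))
```

For the delayed copy the cross-correlation collapses onto one sharp peak (high concentration, high sparseness); for unrelated signals its energy is spread across all lags.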
Abstract:
The implicit projection algorithm of isotropic plasticity is extended to an objective anisotropic elastic perfectly plastic model. The recursion formula developed to project the trial stress onto the yield surface is applicable to any nonlinear elastic law and any plastic yield function. A curvilinear transverse isotropic model, based on a quadratic elastic potential and on Hill's quadratic yield criterion, is then developed and implemented in a computer program for bone mechanics applications. The paper concludes with a numerical study of a schematic bone-prosthesis system to illustrate the potential of the model.
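The projection (return-mapping) idea is easiest to see in its simplest isotropic, one-dimensional form: an elastic trial stress followed, when the trial state violates the yield condition, by a projection back onto the yield surface. The paper generalizes exactly this structure to anisotropic elasticity and Hill's criterion; the numbers below are illustrative only.

```python
def return_map_1d(eps_total, eps_plastic, E, sigma_y):
    """One implicit projection step for 1D elastic perfectly plastic behavior."""
    sigma_trial = E * (eps_total - eps_plastic)    # elastic predictor
    f = abs(sigma_trial) - sigma_y                 # yield function
    if f <= 0.0:
        return sigma_trial, eps_plastic            # elastic step, trial admitted
    # Plastic corrector: project the trial stress onto the yield surface.
    d_gamma = f / E
    sign = 1.0 if sigma_trial > 0 else -1.0
    sigma = sigma_trial - E * d_gamma * sign       # lands exactly on +/- sigma_y
    return sigma, eps_plastic + d_gamma * sign

# Strain ramp beyond yield: stress saturates at sigma_y while plastic
# strain accumulates (E = 200 GPa-like modulus in MPa, sigma_y = 250 MPa).
E, sigma_y, ep = 200e3, 250.0, 0.0
for eps in [0.0005, 0.001, 0.002, 0.004]:
    s, ep = return_map_1d(eps, ep, E, sigma_y)
    print(eps, s, ep)
```

The same predictor/corrector split carries over to tensor stresses; only the yield function and the projection direction change.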
Abstract:
PECUBE is a three-dimensional thermal-kinematic code capable of solving the heat production-diffusion-advection equation under a temporally varying surface boundary condition. It was initially developed to assess the effects of time-varying surface topography (relief) on low-temperature thermochronological datasets. Thermochronometric ages are predicted by tracking the time-temperature histories of rock particles ending up at the surface and by combining these with various age-prediction models. In the decade since its inception, the PECUBE code has been under continuous development as its use became wider and addressed different tectonic-geomorphic problems. This paper describes several major recent improvements in the code, including its integration with an inverse-modeling package based on the Neighborhood Algorithm, the incorporation of fault-controlled kinematics, several different ways to address topographic and drainage change through time, the ability to predict subsurface (tunnel or borehole) data, the prediction of detrital thermochronology data and a method to compare these with observations, and the coupling with landscape-evolution (or surface-process) models. Each new development is described together with one or several applications, so that the reader and potential user can clearly assess and make use of the capabilities of PECUBE. We end by describing some developments that are currently underway or should take place in the foreseeable future.
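The age-prediction step can be reduced to its simplest conceivable form: a rock particle advected toward the surface through a steady linear geotherm, with the predicted age taken as the time since it last cooled through a closure temperature. This is only a cartoon of what PECUBE does (the code solves the full transient 3-D heat equation and uses proper age-prediction models); geotherm, closure temperature and exhumation rate below are illustrative assumptions.

```python
def predicted_age(depth0_km, v_km_per_myr, t_total_myr, tc_celsius,
                  surface_t=10.0, gradient=25.0, dt=0.001):
    """Time before present (Myr) at which the particle cooled through Tc."""
    t, z = 0.0, depth0_km
    while t < t_total_myr and z > 0.0:
        temp = surface_t + gradient * z      # steady linear geotherm (degC)
        if temp < tc_celsius:
            return t_total_myr - t           # it has been below Tc ever since
        z -= v_km_per_myr * dt               # advect the particle upward
        t += dt
    return None                              # never cooled through Tc

# Particle starting 10 km deep, exhumed at 0.5 km/Myr for 20 Myr, with an
# assumed closure temperature of 70 degC: it crosses Tc at 2.4 km depth.
print(predicted_age(10.0, 0.5, 20.0, 70.0))
```

With these numbers the particle reaches the 70 degC isotherm after 15.2 Myr, giving a predicted age of about 4.8 Myr.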
Abstract:
Ultrasound segmentation is a challenging problem due to the inherent speckle and artifacts such as shadows, attenuation and signal dropout. Existing methods need to include strong priors, such as shape priors or analytical intensity models, to succeed in the segmentation. However, such priors tend to limit these methods to a specific target or imaging setting, and they are not always applicable to pathological cases. This work introduces a semi-supervised segmentation framework for ultrasound imaging that alleviates this limitation of fully automatic segmentation: it is applicable to any kind of target and imaging setting. Our methodology uses a graph of image patches to represent the ultrasound image and user-assisted initialization with labels, which act as soft priors. The segmentation problem is formulated as a continuous minimum-cut problem and solved with an efficient optimization algorithm. We validate our segmentation framework on clinical ultrasound images (prostate, fetus, and tumors of the liver and eye). We obtain high agreement with the ground truth provided by medical expert delineations in all applications (94% average Dice score), and the proposed algorithm compares favorably with the literature.
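A discrete stand-in for the continuous minimum-cut formulation: a toy 1-D "image" of patch intensities, two user seeds acting as priors, and the cut computed with a plain Edmonds-Karp max-flow (the paper uses a continuous formulation and a more efficient solver; weights and seeds here are illustrative assumptions).

```python
from collections import deque

def max_flow_min_cut(n_nodes, edges, s, t):
    """Edmonds-Karp max-flow; returns the source-side node set of the min cut."""
    cap = [[0.0] * n_nodes for _ in range(n_nodes)]
    for u, v, c in edges:
        cap[u][v] += c
        cap[v][u] += c                       # symmetric n-links / t-links
    while True:
        parent = [-1] * n_nodes
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:         # BFS for a shortest augmenting path
            u = q.popleft()
            for v in range(n_nodes):
                if parent[v] == -1 and cap[u][v] > 1e-12:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            break
        bottleneck, v = float("inf"), t      # bottleneck capacity on the path
        while v != s:
            bottleneck = min(bottleneck, cap[parent[v]][v])
            v = parent[v]
        v = t
        while v != s:                        # push flow along the path
            cap[parent[v]][v] -= bottleneck
            cap[v][parent[v]] += bottleneck
            v = parent[v]
    seen = {s}                               # residual-reachable nodes = cut side
    q = deque([s])
    while q:
        u = q.popleft()
        for v in range(n_nodes):
            if v not in seen and cap[u][v] > 1e-12:
                seen.add(v)
                q.append(v)
    return seen

intensity = [0.1, 0.2, 0.15, 0.9, 0.85, 0.95]    # toy patches: dark | bright
n = len(intensity)
SRC, SNK = n, n + 1
edges = [(i, i + 1, 1.0 / (0.01 + (intensity[i] - intensity[i + 1]) ** 2))
         for i in range(n - 1)]                   # similarity n-links
edges += [(SRC, 0, 1e6), (SNK, n - 1, 1e6)]       # user seeds as strong t-links
fg = max_flow_min_cut(n + 2, edges, SRC, SNK)
labels = [1 if i in fg else 0 for i in range(n)]
print(labels)
```

The cut severs the weakest similarity link, between the dark and bright patches, so the foreground label follows the dark seed.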
Abstract:
Oscillations have been increasingly recognized as a core property of neural responses that contribute to spontaneous, induced, and evoked activities within and between individual neurons and neural ensembles. They are considered a prominent mechanism for information processing within, and communication between, brain areas. More recently, it has been proposed that interactions between periodic components at different frequencies, known as cross-frequency couplings, may support the integration of neuronal oscillations at different temporal and spatial scales. The present study details methods, based on an adaptive frequency tracking approach, that improve the quantification and statistical analysis of oscillatory components and cross-frequency couplings. This approach allows for time-varying instantaneous frequency, which is particularly important when measuring phase interactions between components. We compared this adaptive approach to traditional band-pass filters in their measurement of phase-amplitude and phase-phase cross-frequency couplings. Evaluations were performed with synthetic signals and EEG data recorded from healthy humans performing an illusory contour discrimination task. First, the synthetic signals in conjunction with Monte Carlo simulations highlighted two desirable features of the proposed algorithm vs. classical filter-bank approaches: resilience to broad-band noise and to oscillatory interference. Second, the analyses with real EEG signals revealed statistically more robust effects (i.e. improved sensitivity) when using the adaptive frequency tracking framework, particularly when identifying phase-amplitude couplings. This was further confirmed after generating surrogate signals from the real EEG data. Adaptive frequency tracking appears to improve the measurement of cross-frequency couplings through precise extraction of neuronal oscillations.
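A phase-amplitude coupling measure of the kind compared in the study can be sketched with plain FFT band-pass filters and an FFT-based analytic signal, standing in for the paper's adaptive frequency tracker. The test signal (a 10 Hz phase modulating a 60 Hz amplitude) and all parameters are illustrative assumptions.

```python
import numpy as np

def analytic(x):
    """Analytic signal via FFT (equivalent to a Hilbert transform); n even."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:n // 2] = 2.0
    h[n // 2] = 1.0
    return np.fft.ifft(X * h)

def bandpass(x, fs, lo, hi):
    """Crude brick-wall band-pass filter in the frequency domain."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), 1.0 / fs)
    X[(f < lo) | (f > hi)] = 0.0
    return np.fft.irfft(X, len(x))

def pac_mvl(x, fs, slow=(8, 12), fast=(50, 70)):
    """Mean-vector-length phase-amplitude coupling (Canolty-style)."""
    phase = np.angle(analytic(bandpass(x, fs, *slow)))
    amp = np.abs(analytic(bandpass(x, fs, *fast)))
    return np.abs(np.mean(amp * np.exp(1j * phase))) / amp.mean()

fs = 512
t = np.arange(0, 8, 1 / fs)
slow = np.sin(2 * np.pi * 10 * t)
coupled = (1 + slow) * np.sin(2 * np.pi * 60 * t)    # amplitude follows slow phase
uncoupled = np.sin(2 * np.pi * 60 * t)               # constant amplitude
print(pac_mvl(slow + coupled, fs), pac_mvl(slow + uncoupled, fs))
```

The coupled signal yields a large mean vector length while the uncoupled one averages out to nearly zero, which is the contrast the paper's indices are designed to detect robustly.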
Abstract:
Methods like Event History Analysis can show the existence of diffusion and part of its nature, but do not study the process itself. Nowadays, thanks to the increasing performance of computers, such processes can be studied using computational modeling. This thesis presents an agent-based model of policy diffusion mainly inspired by the model developed by Braun and Gilardi (2006). I first develop a theoretical framework of policy diffusion that presents the main internal drivers of policy diffusion - the preference for the policy, the effectiveness of the policy, the institutional constraints, and ideology - and its main mechanisms, namely learning, competition, emulation, and coercion. Diffusion, expressed through these interdependencies, is thus a complex process that needs to be studied with computational agent-based modeling. In a second step, computational agent-based modeling is defined along with its most significant concepts: complexity and emergence. Using computational agent-based modeling implies the development of an algorithm and its programming. Once the program has been developed, we let the different agents interact. Consequently, a phenomenon of diffusion, derived from learning, emerges, meaning that the choice made by an agent is conditional on those made by its neighbors. As a result, learning follows an inverted S-curve, which leads to partial convergence - global divergence and local convergence - that triggers the emergence of political clusters, i.e. the creation of regions with the same policy. Furthermore, the average effectiveness in this computational world tends to follow a J-shaped curve, meaning not only that time is needed for a policy to deploy its effects, but also that it takes time for a country to find the best-suited policy.
To conclude, diffusion is an emergent phenomenon arising from complex interactions, and the outcomes of my model are in line with both the theoretical expectations and the empirical evidence.
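The learning mechanism described in this abstract can be sketched with a minimal agent-based model: countries on a ring observe their two neighbours and adopt a neighbour's policy when its observed effectiveness is higher. Grid size, effectiveness values and observation noise are illustrative assumptions, not the thesis's parameters.

```python
import random

random.seed(3)
N = 60
EFFECT = {0: 0.4, 1: 0.7}              # policy 1 is objectively more effective

policies = [0] * N
policies[0] = 1                         # a single early adopter

adopted = []
for step in range(40):
    new = policies[:]
    for i in range(N):
        for j in (i - 1, (i + 1) % N):  # ring neighbourhood
            # noisy observation of the neighbour's policy effectiveness
            observed = EFFECT[policies[j]] + random.gauss(0, 0.05)
            if observed > EFFECT[policies[i]]:
                new[i] = policies[j]    # learning: copy the better policy
    policies = new
    adopted.append(sum(policies))

print(adopted)   # adoption spreads from the seed until everyone converges
```

Even this minimal version reproduces the qualitative result: adoption spreads outward from the early adopter, the count of adopters grows until saturation, and the whole ring converges on the more effective policy.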
Abstract:
INTRODUCTION. The role of turbine-based NIV ventilators (TBV) versus ICU ventilators with NIV mode activated (ICUV) to deliver NIV in case of severe respiratory failure remains debated. OBJECTIVES. To compare the response time and pressurization capacity of TBV and ICUV during simulated NIV with normal and increased respiratory demand, under normal and obstructive respiratory mechanics. METHODS. In a two-chamber lung model, a ventilator simulated normal (P0.1 = 2 mbar, respiratory rate RR = 15/min) or increased (P0.1 = 6 mbar, RR = 25/min) respiratory demand. NIV was simulated by connecting the lung model (compliance 100 ml/mbar; resistance 5 or 20 mbar/(l/s)) to a dummy head equipped with a naso-buccal mask. Connections allowed intentional leaks (29 ± 5 % of insufflated volume). The ventilators tested, Servo-i (Maquet), V60 and Vision (Philips Respironics), were connected to the mask via a standard circuit. Applied pressure support levels (PSL) were 7 mbar for normal and 14 mbar for increased demand. Airway pressure and flow were measured in the ventilator circuit and in the simulated airway. Ventilator performance was assessed by determining trigger delay (Td, ms), pressure-time product at 300 ms (PTP300, mbar s) and inspiratory tidal volume (VT, ml), and compared by three-way ANOVA for the effects of inspiratory effort, resistance and ventilator. Differences between ventilators for each condition were tested by one-way ANOVA and contrast (JMP 8.0.1, p < 0.05). RESULTS. Inspiratory demand and resistance had a significant effect throughout all comparisons. Ventilator data are shown in Table 1 (normal demand) and Table 2 (increased demand): (a) different from Servo-i, (b) different from V60. CONCLUSION. In this NIV bench study with leaks, trigger delay was shorter for TBV with normal respiratory demand. By contrast, it was shorter for ICUV when respiratory demand was high.
ICUV afforded better pressurization (PTP300) with increased demand and PSL, particularly with increased resistance. TBV provided a higher inspiratory VT (i.e., downstream from the leaks) with normal demand, and a significantly (although minimally) lower VT with increased demand and PSL.
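The two bench metrics can be computed directly from a sampled airway-pressure trace: trigger delay (Td) as the time from effort onset until pressure recovers above baseline, and PTP300 as the area between pressure and baseline over the first 300 ms after onset. The synthetic trace below is illustrative, not data from the study.

```python
DT = 0.001                      # sampling step: 1 ms

def trigger_delay_ms(p, baseline, onset_idx):
    """Td: time after effort onset at which p first rises back above baseline."""
    for k in range(onset_idx, len(p)):
        if p[k] > baseline:
            return (k - onset_idx) * DT * 1000.0
    return None

def ptp300(p, baseline, onset_idx):
    """Pressure-time product (mbar*s) of p above baseline over 300 ms."""
    end = onset_idx + int(0.300 / DT)
    return sum(max(p[k] - baseline, 0.0) * DT for k in range(onset_idx, end))

# Synthetic NIV trace: baseline 0 mbar, a 100 ms effort dip, then the
# ventilator pressurizes linearly to 7 mbar over 50 ms and holds.
baseline, onset = 0.0, 100
p = [0.0] * 100 + [-1.0] * 100 + [7.0 * min(k / 50.0, 1.0) for k in range(400)]

print(trigger_delay_ms(p, baseline, onset))
print(ptp300(p, baseline, onset))
```

A slower ventilator response lengthens Td and shrinks PTP300, which is exactly the trade-off the bench comparison quantifies.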
A New Method for ECG Tracking of Persistent Atrial Fibrillation Termination during Stepwise Ablation
Abstract:
Stepwise radiofrequency catheter ablation (step-CA) has become the treatment of choice for the restoration of sinus rhythm (SR) in patients with long-standing persistent atrial fibrillation (pers-AF). Its success rate appears limited because the amount of ablation needed to achieve long-term SR is unknown. Multiple organization indices (OIs) have been previously developed to track the organization of AF during step-CA, however with limited success. We report an adaptive method for tracking AF termination (AF-term) based on OIs characterizing the relationship between harmonic components of atrial activity on the surface ECG. By computing their relative evolution during the last two steps preceding AF-term, we found that our OIs outperformed classical indices in tracking the efficiency of step-CA "en route" to AF-term. Our preliminary results suggest that the gradual synchronization between the fundamental and the first harmonic of AF activity is a promising parameter for predicting AF-term during step-CA.
Abstract:
SUMMARY: Methodological improvements realised over the last decades have permitted a better understanding of gastrointestinal motility. Nevertheless, a method allowing continuous following of luminal contents is still lacking. In order to study the motility of the whole human digestive tract, a new minimally invasive technique was developed at the Department of Physiology in collaboration with the Swiss Federal Institute of Technology (EPFL). The method, called "Magnet Tracking", is based on the detection of the magnetic field generated by swallowed ferromagnetic material. The aim of our work was to demonstrate the feasibility of this new approach for studying human gastrointestinal motility. The magnet used was a cylinder (ø 6x7 mm, 0.2 cm3) coated with silicone. The magnet tracking system consisted of a 4x4 matrix of sensors based on the Hall effect. Signals from the sensors were digitised and sent to a laptop computer for processing and storage.
Specific software was conceived to analyse in real time the progression of the magnet through the gastrointestinal tract. Ten young, healthy volunteers were enrolled in the study. After a fasting period of 12 hours, they swallowed the magnet. The pill was then tracked for two consecutive days, for 34 hours on average. Each subject was studied once, except one who was studied seven times. Subjects lay on their backs for the entire experiment but could interrupt it at any time. Evacuation of the magnet was controlled in all subjects. The examination was well tolerated. The pill could be followed from the esophagus to the rectum. The trajectory of the magnet represented a "mould" of the anatomy of the digestive tube: a good superimposition with radiological anatomy (gastrografin contrast and CT) was obtained. Movements of the magnet were characterized by the periodicity, velocity, and amplitude of displacements for every segment of the digestive tract. The physiological information corresponded well to data from current methods of studying gastrointestinal motility. This work demonstrates the feasibility of the new approach in studies of human gastrointestinal motility. The technique makes it possible to correlate the dynamics of digestive movements with anatomical data in real time. This minimally invasive method is ready for studies of human gastrointestinal motility under physiological as well as pathological conditions. Studies aiming at validation of this new approach as a clinically relevant tool are being realised in several centres in Switzerland and abroad. Abstract: A new minimally invasive technique allowing anatomical mapping and motility studies along the entire human digestive system is presented. The technique is based on continuous tracking of a small magnet progressing through the digestive tract. The coordinates of the magnet are calculated from signals recorded by 16 magnetic field sensors located over the abdomen.
The magnet position, orientation and trajectory are displayed in real time. Ten young healthy volunteers were followed for 34 h. The technique was well tolerated and no complication was encountered. The information obtained comprised the 3-D configuration of the digestive tract and the dynamics of the magnet displacement (velocity, transit time, length estimation, rhythms). In the same individual, repeated examination gave very reproducible results. The anatomical and physiological information obtained corresponded well to data from current methods and imaging. This simple, minimally invasive technique permits examination of the entire digestive tract and is suitable for both research and clinical studies. In combination with other methods, it may represent a useful tool for studies of GI motility under normal and pathological conditions.
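How a tracking system of this kind can recover the pill's position can be sketched with a magnetic point-dipole forward model evaluated at a 4x4 sensor grid, inverted here by a coarse grid search over candidate positions. The geometry, dipole moment and search ranges are illustrative assumptions, not the actual system's calibration.

```python
import numpy as np

MU0_4PI = 1e-7   # mu_0 / (4 pi), SI units

def dipole_field(r_sensor, r_dipole, m):
    """Magnetic field (T) at r_sensor of a point dipole with moment m."""
    d = r_sensor - r_dipole
    dist = np.linalg.norm(d)
    rhat = d / dist
    return MU0_4PI * (3.0 * np.dot(m, rhat) * rhat - m) / dist**3

# Assumed 4x4 sensor matrix in the z = 0 plane, 10 cm spacing.
sensors = np.array([[i * 0.1, j * 0.1, 0.0] for i in range(4) for j in range(4)])
m = np.array([0.0, 0.0, 0.1])                 # assumed dipole moment (A m^2)

true_pos = np.array([0.17, 0.22, 0.12])       # "pill" position to recover (m)
data = np.array([dipole_field(s, true_pos, m) for s in sensors])

# Coarse grid search for the position that best explains the 16 readings.
best, best_err = None, np.inf
for x in np.arange(0.0, 0.31, 0.01):
    for y in np.arange(0.0, 0.31, 0.01):
        for z in np.arange(0.05, 0.21, 0.01):
            pos = np.array([x, y, z])
            pred = np.array([dipole_field(s, pos, m) for s in sensors])
            err = np.sum((pred - data) ** 2)
            if err < best_err:
                best, best_err = pos, err

print(best)   # close to the true position
```

A real system would refine this with a nonlinear least-squares solver, also fitting the dipole orientation, but the forward model is the same.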
Abstract:
Given the very large amount of data obtained every day through population surveys, much new research could use this information instead of collecting new samples. Unfortunately, relevant data are often disseminated across different files obtained through different sampling designs. Data fusion is a set of methods used to combine information from different sources into a single dataset. In this article, we are interested in a specific problem: the fusion of two data files, one of which is quite small. We propose a model-based procedure combining a logistic regression with an Expectation-Maximization algorithm. Results show that, despite the lack of data, this procedure can perform better than standard matching procedures.
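A very schematic version of the proposed idea: a large "donor" file observes (x, z), a small "recipient" file observes only x, and the missing z is handled with a logistic model inside an EM-style loop (E-step: expected labels for the recipient; M-step: refit on the pooled file). Data and model are toy assumptions, not the article's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_logistic(x, z, iters=2000, lr=0.1):
    """Logistic regression (intercept + slope) by gradient ascent on the
    log-likelihood; z may be fractional (expected labels)."""
    w = np.zeros(2)
    X = np.column_stack([np.ones_like(x), x])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w += lr * X.T @ (z - p) / len(x)
    return w

def predict(w, x):
    return 1.0 / (1.0 + np.exp(-(w[0] + w[1] * x)))

# Donor file: x and the binary target z (true slope 2, intercept 0);
# recipient file: x only, z missing by design.
x_don = rng.normal(size=500)
z_don = (rng.random(500) < predict(np.array([0.0, 2.0]), x_don)).astype(float)
x_rec = rng.normal(size=40)

w = fit_logistic(x_don, z_don)
for _ in range(5):                                   # a few EM sweeps
    z_rec = predict(w, x_rec)                        # E-step: expected labels
    w = fit_logistic(np.concatenate([x_don, x_rec]), # M-step: refit pooled
                     np.concatenate([z_don, z_rec]))

print(w)   # slope clearly positive, close to the generating value
```

The fitted model then supplies the fused variable (or its probability) for every recipient record, which is the single-dataset output the article is after.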
Abstract:
High-resolution tomographic imaging of the shallow subsurface is becoming increasingly important for a wide range of environmental, hydrological and engineering applications. Because of their superior resolution power, their sensitivity to pertinent petrophysical parameters, and their far-reaching complementarity, both seismic and georadar crosshole imaging are of particular importance. To date, corresponding approaches have largely relied on asymptotic, ray-based techniques, which account for only a very small part of the observed wavefields, inherently suffer from limited resolution, and may prove inadequate in complex environments. These problems can potentially be alleviated through waveform inversion. We have developed an acoustic waveform inversion approach for crosshole seismic data whose kernel is based on a finite-difference time-domain (FDTD) solution of the 2-D acoustic wave equations. This algorithm is tested on and applied to synthetic data from seismic velocity models of increasing complexity and realism, and the results are compared to those obtained using state-of-the-art ray-based traveltime tomography. Regardless of the heterogeneity of the underlying models, the waveform inversion approach has the potential to reliably resolve both the geometry and the acoustic properties of features smaller than half a dominant wavelength. Our results do, however, also indicate that, within their inherent resolution limits, ray-based approaches provide an effective and efficient means of obtaining satisfactory tomographic reconstructions of the seismic velocity structure in the presence of mild to moderate heterogeneity and in the absence of strong scattering. Conversely, the extra effort of waveform inversion provides the greatest benefits for the most heterogeneous, and arguably most realistic, environments, where multiple scattering effects tend to be prevalent and ray-based methods lose most of their effectiveness.
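The forward kernel named above, in minimal form: a second-order FDTD solution of the 2-D acoustic wave equation p_tt = c^2 (p_xx + p_yy) on a uniform grid with an impulsive point source. Grid, velocity and source are illustrative; a real crosshole engine adds absorbing boundaries, a source wavelet and receiver gathers.

```python
import numpy as np

nx, nz, dx = 101, 101, 1.0
c = np.full((nz, nx), 1500.0)          # homogeneous velocity model (m/s)
dt = 0.4 * dx / c.max()                # time step respecting the CFL limit
p_old = np.zeros((nz, nx))
p = np.zeros((nz, nx))
p[50, 50] = 1.0                        # impulsive point source

for _ in range(60):
    # five-point Laplacian (periodic edges via roll; wavefront stays interior)
    lap = (np.roll(p, 1, 0) + np.roll(p, -1, 0) +
           np.roll(p, 1, 1) + np.roll(p, -1, 1) - 4 * p) / dx**2
    # leapfrog update: p_tt = c^2 * laplacian(p)
    p_new = 2 * p - p_old + (c * dt)**2 * lap
    p_old, p = p, p_new

print(float(np.abs(p).max()))   # bounded amplitude: the scheme is stable
```

Waveform inversion wraps many such forward solves in an optimization loop, which is exactly why it costs so much more than ray tracing.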
Abstract:
Diagnosis of several neurological disorders is based on the detection of typical pathological patterns in the electroencephalogram (EEG). This is a time-consuming task requiring significant training and experience. Automatic detection of these EEG patterns would greatly assist in quantitative analysis and interpretation. We present a method that allows automatic detection of epileptiform events and their discrimination from eye blinks, based on features derived using a novel application of independent component analysis. The algorithm was trained and cross-validated using seven EEGs with epileptiform activity. For epileptiform events with compensation for eye blinks, the sensitivity was 65 ± 22% at a specificity of 86 ± 7% (mean ± SD). With feature extraction by PCA, or classification of raw data, specificity fell to 76% and 74%, respectively, at the same sensitivity. On exactly the same data, the commercially available software Reveal had a maximum sensitivity of 30% and a concurrent specificity of 77%. Our algorithm performed well at detecting epileptiform events in this preliminary test and offers a flexible tool that is intended to be generalized to the simultaneous classification of many waveforms in the EEG.