A Phase Space Box-counting based Method for Arrhythmia Prediction from Electrocardiogram Time Series
Abstract:
Arrhythmia is a class of cardiovascular disease responsible for a large number of deaths and potentially irreversible harm. It is a life-threatening condition originating from the disorganized propagation of electrical signals in the heart, resulting in desynchronization among the chambers of the heart. Fundamentally, synchronization means that the phase relationship of electrical activity between the chambers remains coherent, maintaining a constant phase difference over time. If desynchronization occurs due to arrhythmia, this coherent phase relationship breaks down, resulting in a chaotic rhythm that disrupts the regular pumping mechanism of the heart. This phenomenon was explored using phase space reconstruction, a standard technique for analysing time series data generated by nonlinear dynamical systems. In this project a novel index is presented for predicting the onset of ventricular arrhythmias. Continuously captured long-term ECG recordings were analysed up to the onset of arrhythmia by the phase space reconstruction method, yielding 2-dimensional images that were analysed by the box-counting method. The method was tested on ECG data of three different kinds, namely normal (NR), Ventricular Tachycardia (VT) and Ventricular Fibrillation (VF), extracted from the Physionet ECG database. Statistical measures, namely the mean (μ), standard deviation (σ) and coefficient of variation (σ/μ) of the box counts in the phase space diagrams, were derived for a sliding window of 10 beats of the ECG signal. From these statistical analyses, a threshold was derived as an upper bound on the coefficient of variation (CV) of the box counts of ECG phase portraits, capable of reliably predicting impending arrhythmia well before its actual occurrence.
As future work, it is planned to validate this prediction tool over a wider population of patients affected by different kinds of arrhythmia, such as atrial fibrillation and bundle branch block, and to set different thresholds for them, in order to confirm its clinical applicability.
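A minimal sketch of the kind of index described in the abstract above (this is our illustration, not the thesis's code; the embedding delay, grid size, and function names are assumptions):

```python
import numpy as np

def phase_portrait(ecg, delay):
    """Time-delay embedding of a 1-D ECG segment into a 2-D phase space."""
    return np.column_stack((ecg[:-delay], ecg[delay:]))

def box_count(points, n_boxes=32):
    """Number of occupied boxes when the phase portrait is binned on an n x n grid."""
    mins, maxs = points.min(axis=0), points.max(axis=0)
    scaled = (points - mins) / np.maximum(maxs - mins, 1e-12)  # normalize to unit square
    idx = np.minimum((scaled * n_boxes).astype(int), n_boxes - 1)
    return len({(i, j) for i, j in idx})

def cv_over_windows(counts):
    """Coefficient of variation sigma/mu of per-window box counts."""
    counts = np.asarray(counts, dtype=float)
    return counts.std() / counts.mean()
```

In use, one box count would be computed per sliding 10-beat window, and an alarm raised when the CV of the counts exceeds the derived threshold.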
Abstract:
The heart is a wonderful but complex organ: it uses electrochemical mechanisms to produce the mechanical energy that pumps blood throughout the body and allows the life of humans and animals. This organ can be subject to several diseases, and sudden cardiac death (SCD) is the most catastrophic manifestation of these diseases, responsible for the death of a large number of people throughout the world. It is estimated that 325,000 Americans die of SCD annually. SCD most commonly occurs as a result of reentrant tachyarrhythmias (ventricular tachycardia (VT) and ventricular fibrillation (VF)), and the identification of those patients at higher risk for the development of SCD has been a difficult clinical challenge. Nowadays, a particular electrocardiogram (ECG) abnormality, “T-wave alternans” (TWA), is considered a precursor of lethal cardiac arrhythmias and sudden death, and a sensitive indicator of risk for SCD. TWA is defined as a beat-to-beat alternation in the shape, amplitude, or timing of the T-wave on the ECG, indicative of the underlying repolarization of cardiac cells [5]. In other words, TWA is the macroscopic effect of subcellular and cellular mechanisms involving ionic kinetics and the consequent depolarization and repolarization of the myocytes. Experimental activity has shown that TWA on the ECG is a manifestation of an underlying alternation of long and short action potential durations (APDs), the so-called APD alternans, of cardiac myocytes in the myocardium. Understanding the mechanism of APD alternans is the first step toward preventing their occurrence. In order to investigate these mechanisms it is very important to understand that biological systems are complex systems and that their macroscopic properties arise from the nonlinear interactions among their parts. The whole is greater than the sum of the parts, and it cannot be understood only by studying the single parts.
In this sense the heart is a complex nonlinear system whose behaviour follows nonlinear dynamics; alternans, too, are a manifestation of a phenomenon typical of nonlinear dynamical systems, called a “period-doubling bifurcation”. Over the past decade, it has been demonstrated that electrical alternans in cardiac tissue is an important marker for the development of ventricular fibrillation and a significant predictor of mortality. It has been observed that acute exposure to a low concentration of calcium does not decrease the magnitude of alternans, and sustained ventricular fibrillation (VF) is still easily induced under these conditions. However, with prolonged exposure to a low concentration of calcium, alternans disappears, but VF is still inducible. This work is based on this observation and tries to clarify it. The aim of this thesis is to investigate the effect of hypocalcemia on spatial alternans and VF by performing experiments on canine hearts, perfusing them with a solution with physiological ionic concentrations and with a solution with a low calcium concentration (hypocalcemia); in order to investigate the so-called memory effect, the experimental protocol was modified along the way. The experiments were performed with the optical mapping technique, using a voltage-sensitive dye, and custom-made Java code was used in post-processing. Finding Nolasco and Dahlen's criterion [8] inadequate for the prediction of alternans, and taking into account the experimental results, another criterion, which considers the memory effect, has been implemented. The implementation of this criterion could be the first step in the creation of an AP-based method for discriminating who is at risk of developing VF.
This work is divided into four chapters: the first is a brief presentation of the physiology of the heart; the second is a review of the major theories and discoveries in the study of cardiac dynamics; the third chapter presents an overview of the experimental activity and the optical mapping technique; the fourth chapter contains the presentation of the results and the conclusions.
Abstract:
In this work we study a polyenergetic and multimaterial model for breast image reconstruction in Digital Tomosynthesis, taking into consideration the variety of the materials forming the object and the polyenergetic nature of the X-ray beam. The modelling of the problem leads to the resolution of a high-dimensional nonlinear least-squares problem that, due to its nature as an ill-posed inverse problem, needs some kind of regularization. We test two main classes of methods: the Levenberg-Marquardt method (together with the Conjugate Gradient method for the computation of the descent direction) and two limited-memory BFGS-like methods (L-BFGS). We perform experiments for different values of the regularization parameter (constant or varying at each iteration), tolerances and stopping conditions. Finally, we analyse the performance of the several methods by comparing relative errors, numbers of iterations, computation times and the quality of the reconstructed images.
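As an illustration of the first class of methods mentioned above, here is a bare-bones damped Gauss-Newton (Levenberg-Marquardt) loop on a dense problem; the thesis's actual solver uses Conjugate Gradient for the inner system and imaging-specific regularization, neither of which is shown here:

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, x0, lam=1e-2, n_iter=50):
    """Minimize ||r(x)||^2 by solving the damped normal equations
    (J^T J + lam I) dx = -J^T r at each step (minimal sketch, dense solve)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        r, J = residual(x), jacobian(x)
        dx = np.linalg.solve(J.T @ J + lam * np.eye(len(x)), -J.T @ r)
        if np.linalg.norm(residual(x + dx)) < np.linalg.norm(r):
            x, lam = x + dx, lam / 2   # accept step, trust the model more
        else:
            lam *= 4                   # reject step, increase damping
    return x
```

The damping parameter `lam` plays the same role as the regularization parameter varied in the experiments: large values give cautious gradient-like steps, small values approach the Gauss-Newton step.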
Abstract:
Eye injuries are a large societal problem in both the military and civilian sectors. Eye injury rates have increased in recent military conflicts, and there are over 1.9 million eye injuries in the United States civilian sector annually. In order to develop a better understanding of eye injury risk, several previous studies have developed eye injury criteria based on projectile characteristics. While these injury criteria have been used to estimate the eye injury potential of impact scenarios, they require that the mass, size and velocity of the projectile be known. It is desirable to develop a method to assess the severity of an eye impact in environments where it would be difficult or impossible to determine these projectile characteristics. The current study presents a measurement technique for monitoring the intraocular pressure of the eye under impact loading. Through experimental tests with a custom pressure chamber, a subminiature pressure transducer was validated to be thermally stable and suitable for testing in an impact environment. Once validated, the transducer was used intraocularly, inserted through the optic nerve, to measure the pressure of the eye during blunt-projectile impacts. A total of 150 impact tests were performed using projectiles ranging from 3.2 mm to 17.5 mm in diameter. Investigation of the relationship between projectile energy and intraocular pressure led to the identification of at least two distinct trends. Intraocular pressure and normalized energy measurements indicated a different response for penetrating-type globe rupture injuries with smaller-diameter (d < 1 cm) projectiles and blunt-type globe rupture injuries with larger-diameter (d > 1 cm) projectiles.
Furthermore, regression analysis indicates that relationships exist between intraocular pressure and projectile energy that may allow quantification of eye injury risk based on pressure data, and also that intraocular pressure measurements of impact may lead to a better understanding of the transition between penetrating and blunt globe rupture injury mechanisms.
Abstract:
The purpose of this research project is to study an innovative method for the stability assessment of structural steel systems, namely the Modified Direct Analysis Method (MDM). This method is intended to simplify an existing design method, the Direct Analysis Method (DM), by assuming a sophisticated second-order elastic structural analysis will be employed that can account for member and system instability, and thereby allow the design process to be reduced to confirming the capacity of member cross-sections. This last check can be easily completed by substituting an effective length of KL = 0 into existing member design equations. This simplification will be particularly useful for structural systems in which it is not clear how to define the member slenderness L/r when the laterally unbraced length L is not apparent, such as arches and the compression chord of an unbraced truss. To study the feasibility and accuracy of this new method, a set of 12 benchmark steel structural systems previously designed and analyzed by former Bucknell graduate student Jose Martinez-Garcia and a single column were modeled and analyzed using the nonlinear structural analysis software MASTAN2. A series of Matlab-based programs were prepared by the author to provide the code checking requirements for investigating the MDM. By comparing MDM and DM results against the more advanced distributed plasticity analysis results, it is concluded that the stability of structural systems can be adequately assessed in most cases using MDM, and that MDM often appears to be a more accurate but less conservative method in assessing stability.
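The MDM's final step, checking member cross-sections with KL = 0, amounts to a cross-section strength verification. A hedged sketch of what such a check can look like, using an AISC H1-1-style interaction equation; the function and the numeric demand/capacity values are ours for illustration, not the author's code-checking programs:

```python
def cross_section_check(Pr, Pc, Mr, Mc):
    """AISC H1-1 style interaction check for combined axial force and bending.
    Under the MDM, Pc is the cross-section axial strength (effective length
    KL = 0), so no column-buckling reduction enters this check."""
    if Pr / Pc >= 0.2:
        ratio = Pr / Pc + (8.0 / 9.0) * (Mr / Mc)
    else:
        ratio = Pr / (2.0 * Pc) + Mr / Mc
    return ratio, ratio <= 1.0
```

The second-order elastic analysis supplies the demands `Pr` and `Mr` including system instability effects, which is what allows the capacity side to be reduced to cross-section terms.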
Abstract:
Dental identification is the most valuable method for identifying human remains in single cases with major postmortem alterations, as well as in mass casualties, because of its practicability and high reliability. Computed tomography (CT) has been investigated as a supportive tool for forensic identification and has proven to be valuable. It can also scan the dentition of a deceased person within minutes. In the present study, we investigated currently used restorative materials using ultra-high-resolution dual-source CT and the extended CT scale for the purpose of a color-encoded, in-scale, and artifact-free visualization in 3D volume rendering. In 122 human molars, 220 cavities with 2-, 3-, 4- and 5-mm diameters were prepared. With presently used filling materials (different composites, temporary filling materials, ceramic, and liner), these cavities were restored in six teeth for each material and cavity size (exception: amalgam, n = 1). The teeth were CT scanned and images reconstructed using an extended CT scale. Filling materials were analyzed in terms of the resulting Hounsfield units (HU) and the filling size representation within the images. The restorative materials showed distinctly differing radiopacities, allowing for CT-data-based discrimination. In particular, ceramic and composite fillings could be differentiated. The HU values were used to generate an updated volume-rendering preset for postmortem extended-CT-scale data of the dentition to easily visualize the position of restorations, their shape (in scale), and the material used, color-encoded in 3D. The results provide the scientific background for the application of 3D volume rendering to visualize the human dentition for forensic identification purposes.
Abstract:
For the past sixty years, waveguide slot radiator arrays have played a critical role in microwave radar and communication systems. They feature a well-characterized antenna element capable of direct integration into a low-loss feed structure with highly developed and inexpensive manufacturing processes. Waveguide slot radiators comprise some of the highest-performance antenna arrays ever constructed in terms of side-lobe level, efficiency, and related measures. A wealth of information is available in the open literature regarding design procedures for linearly polarized waveguide slots. By contrast, despite their presence in some of the earliest published reports, little has been presented to date on array designs for circularly polarized (CP) waveguide slots. Moreover, that which has been presented features a classic traveling-wave, efficiency-reducing beam tilt. This work proposes a unique CP waveguide slot architecture which mitigates these problems, and a thorough design procedure employing widely available, modern computational tools. The proposed array topology features simultaneous dual-CP operation with grating-lobe-free, broadside radiation, high aperture efficiency, and good return loss. A traditional X-slot CP element is employed with the inclusion of a slow-wave-structure passive phase shifter to ensure broadside radiation without the need for performance-limiting dielectric loading. It is anticipated that this technology will be advantageous for upcoming polarimetric radar and Ka-band SatCom systems. The presented design methodology represents a philosophical shift away from traditional waveguide slot radiator design practices. Rather than providing design curves and/or analytical expressions for equivalent circuit models, simple first-order design rules, generated via parametric studies, are presented with the understanding that device optimization and design will be carried out computationally.
A unit-cell, S-parameter-based approach provides a sufficient reduction of complexity to permit efficient, accurate device design with attention to realistic, application-specific mechanical tolerances. A transparent, start-to-finish example of the design procedure for a linear sub-array at X-band is presented. Both unit-cell and array performance are calculated via finite element method simulations. Results are confirmed by good agreement with finite-difference time-domain calculations. Array performance exhibiting grating-lobe-free, broadside-scanned, dual-CP radiation with better than 20 dB return loss and over 75% aperture efficiency is presented.
Abstract:
This study develops an automated analysis tool by combining total internal reflection fluorescence microscopy (TIRFM), an evanescent-wave microscopic imaging technique used to capture time-sequential images, with corresponding image-processing Matlab code that identifies the movements of single individual particles. The developed code enables examination of the two-dimensional hindered tangential Brownian motion of nanoparticles with sub-pixel (nanoscale) resolution. The measured mean square displacements of nanoparticles are compared with theoretical predictions to estimate particle diameters and fluid viscosity using a nonlinear regression technique. These estimated values are checked against the diameters and viscosities given by the manufacturers to validate the analysis tool. The nanoparticles used in these experiments are yellow-green polystyrene fluorescent nanospheres (200 nm, 500 nm and 1000 nm in nominal diameter; 505 nm excitation and 515 nm emission wavelengths). The solutions used are de-ionized (DI) water, 10% d-glucose and 10% glycerol. Mean square displacements obtained near the surface show significant deviation from theoretical predictions, which is attributed to DLVO forces in that region, but they conform to theoretical predictions beyond ~125 nm. The proposed automated analysis tool can be powerfully employed in bio-application fields that require examination of single-protein (DNA and/or vesicle) tracking, drug delivery, and cyto-toxicity, unlike traditional measurement techniques that require fixing the cells. Furthermore, this tool can also be usefully applied in the microfluidic areas of non-invasive thermometry, particle tracking velocimetry (PTV), and non-invasive viscometry.
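The core of the estimation step described above can be sketched as follows, for free 2-D diffusion only (the hindered-motion and DLVO corrections the study addresses are omitted; function names and parameter choices are ours):

```python
import numpy as np

def mean_square_displacement(positions, max_lag):
    """MSD of a 2-D trajectory (positions in meters, shape (N, 2)) for lags 1..max_lag."""
    msd = []
    for lag in range(1, max_lag + 1):
        d = positions[lag:] - positions[:-lag]
        msd.append(np.mean(np.sum(d * d, axis=1)))
    return np.array(msd)

def estimate_viscosity(msd, dt, radius, T=298.15):
    """Fit MSD = 4 D t (2-D free diffusion), then invert Stokes-Einstein:
    D = k_B T / (6 pi eta r)  ->  eta = k_B T / (6 pi D r)."""
    k_B = 1.380649e-23
    lags = dt * np.arange(1, len(msd) + 1)
    D = np.polyfit(lags, msd, 1)[0] / 4.0   # slope of MSD vs. time, divided by 4
    return k_B * T / (6.0 * np.pi * D * radius)
```

With the particle radius known, the same fit can instead be inverted for the radius given the viscosity, which is how the manufacturer-specified diameters serve as a validation.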
Abstract:
When a single brushless DC motor is fed by an inverter with a sensorless algorithm embedded in the switching controller, the system exhibits a linear and stable output in terms of speed and torque. However, with two motors modulated by the same inverter, the system is unstable and rendered useless for steady applications unless provided with some resistive damping on the supply lines. This project discusses and analyses the stability of such a system through simulations and hardware demonstrations, and also discusses a method to derive the values of this damping resistance.
Abstract:
“Dual contouring” approaches provide an alternative to the standard Marching Cubes (MC) method for extracting and approximating an isosurface from trivariate data given on a volumetric mesh. These dual approaches solve some of the problems encountered by MC methods. We present a simple method, based on the MC method and the ray intersection technique, to compute isosurface points in the cell interior. One advantage of our method is that it does not require a Hermite interpolation scheme, unlike other dual contouring methods. We perform a complete analysis of all possible configurations to generate a look-up table covering them. We use the look-up table to optimize the ray-intersection method so that it obtains the minimum number of points sufficient for defining topologically correct isosurfaces in all possible configurations. Isosurface points are connected using a simple strategy.
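A toy version of the cell-interior point computation described above, assuming a trilinear field inside a unit cell and bisection refinement of the first sign change along a ray (this is our sketch, not the authors' optimized look-up-table implementation):

```python
def trilinear(f, x, y, z):
    """Trilinear interpolation of the 8 corner values f[i][j][k] of a unit cell."""
    c = 0.0
    for i in (0, 1):
        for j in (0, 1):
            for k in (0, 1):
                w = (x if i else 1 - x) * (y if j else 1 - y) * (z if k else 1 - z)
                c += w * f[i][j][k]
    return c

def ray_isosurface_point(f, p, d, iso, n=64):
    """March a ray p + t*d (t in [0,1]) through the cell, then bisect the first
    sign change of (trilinear - iso) to locate an interior isosurface point."""
    g = lambda t: trilinear(f, p[0] + t * d[0], p[1] + t * d[1], p[2] + t * d[2]) - iso
    ts = [i / n for i in range(n + 1)]
    for a, b in zip(ts, ts[1:]):
        if g(a) * g(b) <= 0:            # bracketed crossing
            for _ in range(40):         # bisection refinement
                m = 0.5 * (a + b)
                a, b = (m, b) if g(a) * g(m) > 0 else (a, m)
            t = 0.5 * (a + b)
            return (p[0] + t * d[0], p[1] + t * d[1], p[2] + t * d[2])
    return None                         # no crossing along this ray
```

The look-up table in the paper decides, per configuration, how few such rays are needed; the sketch above only shows how one interior point is obtained.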
Abstract:
The aim of this study was to determine the influence of individual factors on differences in bone mineral density (BMD) measured using dual X-ray absorptiometry pencil-beam (PB) and fan-beam (FB) modes in vivo and in vitro. PB.BMD and FB.BMD of 63 normal Caucasian females aged 21-80 yr were measured at the lumbar spine and hip. Residuals of the FB/PB regression were used to assess the impact of height, weight, adiposity index (AI) (= weight/height^(3/2)), back tissue thickness, and PB.BMD, respectively, on the FB/PB difference. The Hologic Anthropomorphic Spine Phantom (ASP) was measured using the PB and FB modes at two different levels to assess the impact of scanning mode and focus distance. The European Spine Phantom (ESP) prototype, a geometrically well-defined phantom with known vertebral densities, was measured using PB and FB modes and analyzed manually to determine the impact of bone density, and automatically to determine the impact of edge detection, on the FB/PB difference. Population BMD results were perfectly correlated, but significantly overestimated by 1.5% at the lumbar spine and underestimated by 0.7% at the neck, 1.8% at the trochanter, and 2.0% at the total hip, respectively, when using the FB compared with the PB mode. At the lumbar spine, the FB/PB residual correlated negatively with height (r = 0.34, p < 0.01) and PB.BMD (r = 0.48, p < 0.0001) and positively with AI (r = 0.26, p < 0.05). At the hip, the residual of the trochanter correlated positively with weight (r = 0.36, p < 0.01) and AI (r = 0.36, p < 0.01). The FB mode significantly increased ASP BMD by 0.7% compared with PB. Using the FB mode, increasing focus distance significantly (p < 0.001) decreased area and bone mineral content, but not BMD. By contrast, increasing focus distance significantly decreased PB.BMD by 0.7%. With the ESP, the PB mode supplied accurate projected bone area (AREA) results but significantly underestimated the specified BMD in the manual analysis.
The FB mode significantly underestimated PB.AREA by 2.9% but fitted the specified BMD quite well. The FB/PB overestimation was larger for the low-density (+8.7%) than for the high-density vertebra (+4.9%). The automated analysis resulted in more than 14% underestimation of PB.AREA (low-density vertebra) and an almost 13% overestimation of PB.BMD (high-density vertebra) using FB. In conclusion, FB and PB measurements are highly correlated at the lumbar spine and hip, with small but significant BMD differences related to height, adiposity, and BMD. In clinical practice, it can be erroneous to switch from one method to the other, especially in women with low bone density.
Abstract:
Image denoising methods have been implemented in both spatial and transform domains. Each domain has its advantages and shortcomings, which can be complemented by each other. State-of-the-art methods like block-matching 3D filtering (BM3D) therefore combine both domains. However, implementation of such methods is not trivial. We offer a hybrid method that is surprisingly easy to implement and yet rivals BM3D in quality.
Abstract:
PURPOSE: To develop and implement a method for improved cerebellar tissue classification on brain MRI by automatically isolating the cerebellum prior to segmentation. MATERIALS AND METHODS: Dual fast spin echo (FSE) and fluid-attenuated inversion recovery (FLAIR) images were acquired from 18 normal volunteers on a 3 T Philips scanner. The cerebellum was isolated from the rest of the brain using a symmetric, inverse-consistent nonlinear registration of the individual brain with a parcellated template. The cerebellum was then separated by masking the anatomical image with the individual FLAIR images. Tissues in the cerebellum and in the rest of the brain were classified separately using a hidden Markov random field (HMRF), a parametric method, and then combined to obtain the tissue classification of the whole brain. The proposed method for tissue classification on real MR brain images was evaluated subjectively by two experts. The segmentation results on Brainweb images with varying noise and intensity nonuniformity levels were quantitatively compared with the ground truth by computing Dice similarity indices. RESULTS: The proposed method significantly improved cerebellar tissue classification for all normal volunteers included in this study without compromising the classification in the remaining part of the brain. The average similarity indices for gray matter (GM) and white matter (WM) in the cerebellum are 89.81 (+/-2.34) and 93.04 (+/-2.41), demonstrating excellent performance of the proposed methodology. CONCLUSION: The proposed method significantly improved tissue classification in the cerebellum. The GM was overestimated when segmentation was performed on the whole brain as a single object.
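The Dice similarity index used for the quantitative comparison above is a standard overlap measure and can be computed directly from two binary segmentation masks (expressed here as a percentage, matching the abstract's reporting):

```python
import numpy as np

def dice_index(seg_a, seg_b):
    """Dice similarity index 2|A ∩ B| / (|A| + |B|), as a percentage."""
    a, b = np.asarray(seg_a, bool), np.asarray(seg_b, bool)
    return 200.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
```

A value of 100 means the two masks are identical; values above roughly 70-80 are commonly read as good agreement for brain structures.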
Abstract:
A nonlinear viscoelastic image registration algorithm based on the demons paradigm and incorporating an inverse consistency constraint (ICC) is implemented. An inverse-consistent and symmetric cost function using mutual information (MI) as the similarity measure is employed. The cost function also includes regularization of the transformation and the inverse consistency error (ICE). The uncertainties in balancing the various terms in the cost function are avoided by alternately minimizing the similarity measure, the regularization of the transformation, and the ICE terms. Diffeomorphic registration, preventing folding and/or tearing of the deformation, is achieved by the composition scheme. The quality of the image registration is first demonstrated by constructing a brain atlas from 20 adult brains (age range 30-60). It is shown that with this registration technique: (1) the Jacobian determinant is positive for all voxels; and (2) the average ICE is around 0.004 voxels, with a maximum value below 0.1 voxels. Further, deformation-based segmentation on the Internet Brain Segmentation Repository, a publicly available dataset, has yielded a high Dice similarity index (DSI) of 94.7% for the cerebellum and 74.7% for the hippocampus, attesting to the quality of our registration method.
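The diffeomorphism check reported above, positivity of the Jacobian determinant of the mapping x → x + u(x), can be sketched for a 2-D displacement field as follows (the (2, H, W) array layout and finite-difference gradients are our assumptions; the paper works with 3-D voxel fields):

```python
import numpy as np

def jacobian_determinant(disp):
    """Jacobian determinant of x -> x + u(x) for a 2-D displacement field
    disp of shape (2, H, W): disp[0] = u_x, disp[1] = u_y, rows = y, cols = x."""
    ux_y, ux_x = np.gradient(disp[0])   # du_x/dy, du_x/dx
    uy_y, uy_x = np.gradient(disp[1])   # du_y/dy, du_y/dx
    return (1 + ux_x) * (1 + uy_y) - ux_y * uy_x
```

A deformation is locally invertible (no folding) wherever this determinant is positive, which is the per-voxel condition the paper verifies.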
Abstract:
Proton radiation therapy is gaining popularity because of the unique characteristics of its dose distribution, e.g., the high dose gradient at the distal end of the percentage-depth-dose curve (known as the Bragg peak). The high dose gradient offers the possibility of delivering a high dose to the target while still sparing critical organs distal to the target. However, the high dose gradient is a double-edged sword: a small shift of the highly conformal high-dose area can cause the target to be substantially under-dosed or the critical organs to be substantially over-dosed. Because of this, large margins are required in treatment planning to ensure adequate dose coverage of the target, which prevents us from realizing the full potential of proton beams. Therefore, it is critical to reduce uncertainties in proton radiation therapy. One major uncertainty in proton treatment is the range uncertainty related to the estimation of the proton stopping power ratio (SPR) distribution inside a patient. The SPR distribution inside a patient is required to account for tissue heterogeneities when calculating the dose distribution inside the patient. In current clinical practice, the SPR distribution inside a patient is estimated from the patient's treatment-planning computed tomography (CT) images based on a CT number-to-SPR calibration curve. The SPR derived from a single CT number carries large uncertainties in the presence of human tissue composition variations, which is the major drawback of the current SPR estimation method. We propose to solve this problem by using dual energy CT (DECT) and hypothesize that the range uncertainty can be reduced by a factor of two from the currently used value of 3.5%. A MATLAB program was developed to calculate the electron density ratio (EDR) and effective atomic number (EAN) from two CT measurements of the same object. An empirical relationship was discovered between the mean excitation energies and EANs of human body tissues.
With the MATLAB program and the empirical relationship, a DECT-based method was successfully developed to derive SPRs for human body tissues (the DECT method). The DECT method is more robust against uncertainties in human tissue compositions than the current single-CT-based method, because it incorporates both density and elemental composition information into the SPR estimation. Furthermore, we studied the practical limitations of the DECT method. We found that the accuracy of the DECT method using a conventional kV-kV x-ray pair is susceptible to CT number variations, which compromises the theoretical advantage of the DECT method. Our solution to this problem is to use a different x-ray pair for the DECT. The accuracy of the DECT method using different combinations of x-ray energies, i.e., the kV-kV, kV-MV and MV-MV pairs, was compared using the measured imaging uncertainties for each case. The kV-MV DECT was found to be the most robust against CT number variations. In addition, we studied how uncertainties propagate through the DECT calculation, and found general principles for selecting x-ray pairs for the DECT method to minimize its sensitivity to CT number variations. The uncertainties in SPRs estimated using the kV-MV DECT were analyzed further and compared with those of the stoichiometric method. The uncertainties in SPR estimation can be divided into five categories according to their origins: the inherent uncertainty, the DECT modeling uncertainty, the CT imaging uncertainty, the uncertainty in the mean excitation energy, and the SPR variation with proton energy. Additionally, human body tissues were divided into three groups: low-density (lung) tissues, soft tissues and bone tissues. Uncertainties were estimated separately for each group because they differed under each condition.
An estimate of the composite range uncertainty (2σ) was determined for three tumor sites (prostate, lung, and head-and-neck) by combining the uncertainty estimates of all three tissue groups, weighted by their proportions along the typical beam path for each treatment site. In conclusion, the DECT method holds theoretical advantages over the current single-CT-based method in estimating SPRs for human tissues. Using existing imaging techniques, the kV-MV DECT approach was capable of reducing the range uncertainty from the currently used value of 3.5% to 1.9%-2.3%, but it falls short of our original goal of reducing the range uncertainty by a factor of two. The dominant source of uncertainty in the kV-MV DECT was the CT imaging uncertainty, especially in MV CT imaging. Further reduction of the beam-hardening effect, the impact of scatter, out-of-field objects, etc., would reduce the Hounsfield unit variations in CT imaging. The kV-MV DECT therefore still has the potential to reduce the range uncertainty further.
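A sketch of the final SPR step described in the abstract above, combining a relative electron density with a Bethe stopping-number ratio. The I(EAN) fit here is only a stand-in for the thesis's empirical relationship (we use the Bloch-rule approximation I ≈ 10·Z eV), and β² is fixed for illustration rather than taken from a clinical proton energy:

```python
import numpy as np

M_E_C2 = 0.511e6   # electron rest energy, eV

def mean_excitation_energy(ean):
    """Stand-in I(EAN) relation (Bloch rule, I ≈ 10·Z eV); the thesis's
    empirical fit for human tissues would be used here instead."""
    return 10.0 * ean

def spr(edr, ean, I_water=75.0, beta2=0.3):
    """Stopping power ratio to water: SPR = EDR * L(I_medium) / L(I_water),
    with the Bethe stopping number L(I) = ln(2 m_e c^2 beta^2 / (I (1 - beta^2))) - beta^2
    (no shell or density corrections)."""
    def L(I):
        return np.log(2.0 * M_E_C2 * beta2 / (I * (1.0 - beta2))) - beta2
    return edr * L(mean_excitation_energy(ean)) / L(I_water)
```

In the DECT method, `edr` and `ean` are the two quantities recovered from the paired CT measurements; the single-CT calibration curve, by contrast, must guess both from one CT number.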