932 results for Electromagnetism in medicine.
Abstract:
The growth rate of an abdominal aortic aneurysm (AAA) is thought to be an important indicator of the potential risk of rupture, and wall stress is thought to be a trigger for rupture. However, how wall stress changes during the expansion of an AAA is unclear. Forty-four patients with AAAs were included in this longitudinal follow-up study. They were assessed by serial abdominal ultrasonography, with computerized tomography (CT) scans performed if a critical size was reached or a rapid expansion occurred. Patient-specific 3-dimensional AAA geometries were reconstructed from the follow-up CT images. Structural analysis was performed to calculate the wall stresses of the AAA models at both baseline and the final visit. A non-linear large-strain finite element method was used to compute the wall stress distribution. The average growth rate was 0.66 cm/year (range 0-1.32 cm/year). A significantly positive correlation between shoulder stress at baseline and growth rate was found (r=0.342; p=0.02). Higher shoulder stress is associated with a rapidly expanding AAA; it may therefore be useful for estimating the expansion of AAAs and for further risk stratification of patients with AAAs.
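For reference, the growth rate reported here is typically computed from the change in maximum aneurysm diameter over the follow-up interval; a common diameter-based definition (an assumption on our part, since the abstract does not state the exact formula) is

\[
\text{growth rate} = \frac{D_{\text{final}} - D_{\text{baseline}}}{\Delta t\ (\text{years})} \quad [\text{cm/year}].
\]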
Abstract:
Rupture of vulnerable atheromatous plaque in the carotid and coronary arteries often leads to stroke and heart attack, respectively. The mechanism of blood flow and plaque rupture in stenotic arteries is still not fully understood. A three-dimensional rigid-wall model was solved under steady-state and unsteady conditions, assuming a time-varying inlet velocity profile, to investigate the relative importance of axial forces and pressure drops in arteries with asymmetric stenosis. Flow-structure interactions were investigated for the same geometry, and the results were compared with those obtained from the corresponding 2D cross-section structural models. The Navier-Stokes equations were used as the governing equations for the fluid. The tube wall was assumed to be hyperelastic, homogeneous, isotropic and incompressible. The analysis showed that the three-dimensional behavior of velocity, pressure and wall shear stress is in general very different from that predicted by cross-section models. The pressure drop across the stenosis was found to be much higher than the shear stress. Therefore, pressure may be a more important mechanical trigger for plaque rupture than shear stress, although shear stress is closely related to plaque formation and progression.
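For context, the governing equations referred to above are, for an incompressible Newtonian fluid (the Newtonian assumption is ours; the abstract does not state the constitutive model),

\[
\rho\left(\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}\right) = -\nabla p + \mu\,\nabla^{2}\mathbf{u}, \qquad \nabla\cdot\mathbf{u} = 0,
\]

where \(\mathbf{u}\) is the velocity field, \(p\) the pressure, \(\rho\) the density and \(\mu\) the dynamic viscosity.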
Abstract:
Arterial compliance has been shown to correlate well with overall cardiovascular outcome and may also be a potential risk factor for the development of atheromatous disease. This study assesses the utility of 2-D phase contrast magnetic resonance (MR) imaging with intra-sequence blood pressure measurement for determining carotid compliance and distensibility. Twenty patients underwent 2-D phase contrast MR imaging as well as ultrasound-based wall-tracking measurements. Values for carotid compliance and distensibility were derived from the two modalities and compared. Linear regression analysis was used to determine the extent of correlation between MR- and ultrasound-derived parameters, and an agreement analysis was undertaken for those variables that could be directly compared. MR measures of compliance showed a good correlation with measures based on ultrasound wall tracking (r=0.61, 95% CI 0.34 to 0.81, p=0.0003). Vessels that had previously undergone carotid endarterectomy were significantly less compliant than either diseased or normal contralateral vessels (p=0.04). Agreement studies showed a relatively poor intra-class correlation coefficient (ICC) between diameter-based measures of compliance by either MR or ultrasound (ICC=0.14). MRI-based assessment of local carotid compliance appears to be both robust and technically feasible in most subjects. Measures of compliance correlate well with ultrasound-based values and correlate best when cross-sectional area change is used rather than derived diameter changes. If validated by further, larger studies, 2-D phase contrast imaging with intra-sequence blood pressure monitoring and off-line radial artery tonometry may provide a useful tool in the further assessment of patients with carotid atheroma.
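The area-based quantities compared here are conventionally defined as follows (standard definitions, not reproduced from the paper itself):

\[
C = \frac{\Delta A}{\Delta P}, \qquad D = \frac{\Delta A}{A_{d}\,\Delta P},
\]

where \(\Delta A\) is the systolic–diastolic change in lumen cross-sectional area, \(A_{d}\) is the diastolic area and \(\Delta P\) is the pulse pressure; diameter-based variants replace the area terms with lumen diameter.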
Abstract:
It is well accepted that over 50% of cerebral ischemic events result from the rupture of vulnerable carotid atheroma and subsequent thrombosis. Such strokes are potentially preventable by carotid interventions. Selection of patients for intervention is currently based on the severity of carotid luminal stenosis. It is, however, widely accepted that luminal stenosis alone may not be an adequate predictor of risk. To evaluate the effects of the degree of luminal stenosis and plaque morphology on plaque stability, we used a coupled nonlinear time-dependent model with flow-plaque interaction simulation to perform flow and stress/strain analysis for a stenotic artery with a plaque. The Navier-Stokes equations in the Arbitrary Lagrangian-Eulerian (ALE) formulation were used as the governing equations for the fluid. The Ogden strain energy function was used for both the fibrous cap and the lipid pool. The plaque principal stresses and flow conditions were calculated for every case while varying the fibrous cap thickness from 0.1 to 2 mm and the degree of luminal stenosis from 10% to 90%. Severe stenosis led to high flow velocities and high shear stresses, but a low or even negative pressure at the throat of the stenosis. A higher degree of stenosis and a thinner fibrous cap led to larger plaque stresses, and a 50% decrease in fibrous cap thickness resulted in a 200% increase in maximum stress. This model suggests that fibrous cap thickness is critically related to plaque vulnerability and that, even in the presence of only moderate stenosis, it may play an important role in the future risk stratification of such patients when identified in vivo using high-resolution MR imaging.
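For reference, the Ogden strain energy function mentioned above has the standard incompressible form

\[
W = \sum_{i=1}^{N} \frac{\mu_i}{\alpha_i}\left(\lambda_1^{\alpha_i} + \lambda_2^{\alpha_i} + \lambda_3^{\alpha_i} - 3\right),
\]

where \(\lambda_1, \lambda_2, \lambda_3\) are the principal stretches and \(\mu_i, \alpha_i\) are material constants fitted separately for the fibrous cap and the lipid pool (the specific parameter values are not reproduced here).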
Abstract:
Considering ultrasound propagation through complex composite media as an array of parallel sonic rays, a comparison of computer-simulated predictions with experimental data has previously been reported for transmission mode (where one transducer serves as transmitter and the other as receiver) in a series of ten acrylic step-wedge samples, immersed in water, exhibiting varying degrees of transit time inhomogeneity. In this study, the same samples were used but in pulse-echo mode, where the same ultrasound transducer served as both transmitter and receiver, detecting both ‘primary’ (internal sample interface) and ‘secondary’ (external sample interface) echoes. A transit time spectrum (TTS) was derived, describing the proportion of sonic rays with a particular transit time. A computer simulation was performed to predict the transit time and amplitude of the various echoes created, and the results were compared with experimental data. Applying an amplitude-tolerance analysis, 91.7±3.7% of the simulated data were within ±1 standard deviation (STD) of the experimentally measured amplitude-time data. Correlation of predicted and experimental transit time spectra provided coefficients of determination (R²) ranging from 96.8% to 100.0% for the various samples tested. The results of this study provide good evidence for the concept of parallel sonic rays. Furthermore, deconvolution of experimental input and output signals has been shown to provide an effective method for identifying echoes otherwise lost due to phase cancellation. Potential applications of pulse-echo ultrasound transit time spectroscopy (PE-UTTS) include improvement of ultrasound image fidelity by improving spatial resolution and reducing phase interference artefacts.
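The deconvolution step described above can be illustrated with a minimal frequency-domain (Wiener-regularised) sketch. This is a generic illustration rather than the authors' implementation; the function name, regularisation constant and sampling-rate argument are ours:

```python
import numpy as np

def transit_time_spectrum(input_sig, output_sig, fs, reg=1e-3):
    """Estimate a transit-time spectrum by Wiener-regularised deconvolution.

    Models the recorded output as the input convolved with a spectrum of
    delayed, scaled echoes; dividing in the frequency domain (with
    regularisation) recovers that spectrum.
    """
    n = len(input_sig) + len(output_sig) - 1
    X = np.fft.rfft(input_sig, n)
    Y = np.fft.rfft(output_sig, n)
    # Damp frequencies where the input carries little energy
    H = Y * np.conj(X) / (np.abs(X) ** 2 + reg * np.max(np.abs(X)) ** 2)
    tts = np.fft.irfft(H, n)
    delays = np.arange(n) / fs  # transit-time axis in seconds
    return delays, tts
```

Peaks in the returned spectrum correspond to individual echo transit times; the regularisation term prevents division by near-zero spectral components of the input.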
Abstract:
The quality of ultrasound computed tomography imaging is primarily determined by the accuracy of ultrasound transit time measurement. A major problem in the analysis is the overlap of signals, which makes it difficult to detect the correct transit time. The current standard is to apply a matched-filtering approach to the input and output signals. This study compares the matched-filtering technique with active set deconvolution for deriving a transit time spectrum from a coded excitation chirp signal and the measured output signal. The ultrasound wave travels along a direct and a reflected path to the receiver, resulting in an overlap in the recorded output signal. The matched-filtering and deconvolution techniques were applied to determine the transit times associated with the two signal paths. Both techniques were able to detect the two different transit times; while matched filtering has better accuracy (0.13 μs vs. 0.18 μs standard deviation), deconvolution has a 3.5 times better side-lobe to main-lobe ratio. Higher side-lobe suppression is important to further improve image fidelity. These results suggest that a future combination of both techniques could provide improved signal detection and hence improved image fidelity.
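A minimal sketch of the matched-filtering step for a chirp-coded excitation with two overlapping arrival paths might look as follows; the sampling rate, chirp parameters, delays and amplitudes are illustrative values, not quantities from the study:

```python
import numpy as np
from scipy import signal

fs = 50e6                                          # illustrative sampling rate
t = np.arange(0, 20e-6, 1 / fs)
chirp = signal.chirp(t, f0=1e6, t1=t[-1], f1=5e6)  # coded excitation

# Simulated received signal: a direct path plus a delayed, weaker reflected path
rx = np.zeros(4096)
for delay, amp in [(10e-6, 1.0), (13e-6, 0.4)]:
    i = int(delay * fs)
    rx[i:i + len(chirp)] += amp * chirp
rx += 0.02 * np.random.randn(len(rx))

# Matched filter: correlate the received signal with the transmitted chirp;
# peaks of the envelope give candidate transit times
mf = signal.correlate(rx, chirp, mode='valid')
env = np.abs(signal.hilbert(mf))
peaks, _ = signal.find_peaks(env, height=0.3 * env.max(), distance=int(1e-6 * fs))
transit_times = peaks / fs                         # expected: ~10 us and ~13 us
```

The chirp compresses into a narrow correlation peak per path, which is what allows the two overlapping arrivals to be separated despite their overlap in the raw signal.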
Abstract:
A flexible and simple Bayesian decision-theoretic design for dose-finding trials is proposed in this paper. To reduce the computational burden, we adopt a working model with conjugate priors, which is flexible enough to fit all monotonic dose-toxicity curves and produces analytic posterior distributions. We also discuss how to use a proper utility function to reflect the interest of the trial. Patients are allocated based not only on the utility function but also on the chosen dose selection rule. The most popular dose selection rule is the one-step-look-ahead (OSLA) rule, which selects the best-so-far dose. A more complicated rule, such as the two-step-look-ahead, is theoretically more efficient than OSLA only when the required distributional assumptions are met, which is often not the case in practice. We carried out extensive simulation studies to evaluate these two dose selection rules and found that OSLA was often more efficient than the two-step-look-ahead under the proposed Bayesian structure. Moreover, our simulation results show that the proposed Bayesian method's performance is superior to that of several popular Bayesian methods and that the negative impact of prior misspecification can be managed at the design stage.
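The one-step-look-ahead rule can be sketched with a deliberately simplified working model. The paper's actual conjugate monotone dose-toxicity model and utility function are richer than the independent Beta-Binomial toy below, whose prior parameters, target level and squared-error utility are illustrative assumptions:

```python
import numpy as np

target = 0.25                          # illustrative target toxicity probability
prior_a = np.full(4, 0.5)              # Beta prior parameters for each of 4 doses
prior_b = np.full(4, 1.5)

def utility(a, b):
    """Negative expected squared distance of the posterior toxicity
    probability from the target, under a Beta(a, b) posterior."""
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    return -((mean - target) ** 2 + var)

def osla_next_dose(tox, n):
    """One-step-look-ahead: treat the next patient at the best-so-far dose,
    i.e. the dose maximising the current posterior expected utility."""
    a = prior_a + tox                  # tox: toxicities observed per dose
    b = prior_b + (n - tox)            # n: patients treated per dose
    return int(np.argmax([utility(ai, bi) for ai, bi in zip(a, b)]))

# Example: 3 patients at dose 0 (no toxicity), 2 at dose 1 (1 toxicity)
print(osla_next_dose(np.array([0, 1, 0, 0]), np.array([3, 2, 0, 0])))
```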
Abstract:
We investigate methods for data-based selection of working covariance models in the analysis of correlated data with generalized estimating equations. We study two selection criteria: Gaussian pseudolikelihood and a geodesic distance based on discrepancy between model-sensitive and model-robust regression parameter covariance estimators. The Gaussian pseudolikelihood is found in simulation to be reasonably sensitive for several response distributions and noncanonical mean-variance relations for longitudinal data. Application is also made to a clinical dataset. Assessment of adequacy of both correlation and variance models for longitudinal data should be routine in applications, and we describe open-source software supporting this practice.
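In the usual GEE notation, the Gaussian pseudolikelihood criterion takes the form (written here for reference; see the paper for the exact version used)

\[
-2\,\ell_G = \sum_{i=1}^{n}\left\{ \log\bigl|V_i(\hat\alpha)\bigr| + (y_i - \hat\mu_i)^{\top} V_i(\hat\alpha)^{-1} (y_i - \hat\mu_i) \right\},
\]

where \(V_i\) is the working covariance matrix for subject \(i\); candidate working models are then ranked by the criterion value they achieve at the fitted estimates.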
Abstract:
Objective: To discuss generalized estimating equations as an extension of generalized linear models by commenting on the paper by Ziegler and Vens, "Generalized Estimating Equations. Notes on the Choice of the Working Correlation Matrix". Methods: An international group of experts was invited to comment on this paper. Results: Several perspectives have been taken by the discussants. Econometricians have established parallels to the generalized method of moments (GMM). Statisticians discussed model assumptions and the aspect of missing data. Applied statisticians commented on practical aspects of data analysis. Conclusions: In general, careful modeling of the correlation is encouraged when considering estimation efficiency and other implications, and a comparison of the choice of instruments in GMM and in generalized estimating equations (GEE) would be worthwhile. Some theoretical drawbacks of GEE need to be further addressed and require careful analysis of the data. This particularly applies to the situation when data are missing at random.
Abstract:
In analysis of longitudinal data, the variance matrix of the parameter estimates is usually estimated by the 'sandwich' method, in which the variance for each subject is estimated by its residual products. We propose smooth bootstrap methods by perturbing the estimating functions to obtain 'bootstrapped' realizations of the parameter estimates for statistical inference. Our extensive simulation studies indicate that the variance estimators by our proposed methods can not only correct the bias of the sandwich estimator but also improve the confidence interval coverage. We applied the proposed method to a data set from a clinical trial of antibiotics for leprosy.
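For reference, the 'sandwich' (robust) covariance estimator referred to here has the standard GEE form

\[
\widehat{\operatorname{Var}}(\hat\beta)
= \Bigl(\sum_{i} D_i^{\top} V_i^{-1} D_i\Bigr)^{-1}
\Bigl(\sum_{i} D_i^{\top} V_i^{-1} \hat r_i \hat r_i^{\top} V_i^{-1} D_i\Bigr)
\Bigl(\sum_{i} D_i^{\top} V_i^{-1} D_i\Bigr)^{-1},
\]

where \(D_i = \partial\mu_i/\partial\beta\), \(V_i\) is the working covariance and \(\hat r_i = y_i - \hat\mu_i\) is the residual vector for subject \(i\); the proposed smooth bootstrap perturbs the estimating functions rather than relying on these residual products alone.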
Abstract:
Adaptations of weighted rank regression to the accelerated failure time model for censored survival data have been successful in yielding asymptotically normal estimates and flexible weighting schemes that increase statistical efficiency. However, only for one simple weighting scheme, the Gehan or Wilcoxon weights, are the estimating equations guaranteed to be monotone in the parameter components, and even in this case they are step functions, requiring the equivalent of linear programming for computation. The lack of smoothness makes standard error or covariance matrix estimation even more difficult. An induced smoothing technique has overcome these difficulties in various problems involving monotone but pure-jump estimating equations, including conventional rank regression. The present paper applies induced smoothing to Gehan-Wilcoxon weighted rank regression for the accelerated failure time model, in the more difficult case of survival time data subject to censoring, where the inapplicability of permutation arguments necessitates a new method of estimating the null variance of the estimating functions. Smooth monotone parameter estimation and rapid, reliable standard error or covariance matrix estimation are obtained.
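Schematically, and in one common formulation (not necessarily the notation of this paper), the Gehan-weighted estimating function for the accelerated failure time model and its induced-smoothed counterpart are

\[
U(\beta) = \frac{1}{n}\sum_{i}\sum_{j}\delta_i\,(x_i - x_j)\,I\{e_j(\beta)\ge e_i(\beta)\},
\qquad
\tilde U(\beta) = \frac{1}{n}\sum_{i}\sum_{j}\delta_i\,(x_i - x_j)\,\Phi\!\left(\frac{e_j(\beta)-e_i(\beta)}{r_{ij}}\right),
\]

where \(e_i(\beta)=\log t_i - x_i^{\top}\beta\), \(\delta_i\) is the censoring indicator, \(\Phi\) is the standard normal distribution function and \(r_{ij}\) is a data-driven smoothing scale; replacing the indicator by \(\Phi\) makes \(\tilde U\) smooth and monotone while preserving the limiting behaviour of \(U\).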
Abstract:
A decision-theoretic framework is proposed for designing sequential dose-finding trials with multiple outcomes. The optimal strategy is solvable theoretically via backward induction. However, for dose-finding studies involving k doses, the computational complexity is the same as that of the bandit problem with k dependent arms, which is computationally prohibitive. We therefore provide two computationally compromised strategies, which are of practical interest because the computational complexity is greatly reduced: one is closely related to the continual reassessment method (CRM), and the other improves on CRM and approximates the optimal strategy more closely. In particular, we present the framework for phase I/II trials with multiple outcomes. Applications to a pediatric HIV trial and a cancer chemotherapy trial are given to illustrate the proposed approach. Simulation results for the two trials show that the computationally compromised strategy can perform well and appears to be ethical for allocating patients. The proposed framework can provide a better approximation to the optimal strategy if more extensive computing is available.
Abstract:
The primary goal of a phase I trial is to find the maximally tolerated dose (MTD) of a treatment. The MTD is usually defined in terms of a tolerable probability, q*, of toxicity. Our objective is to find the highest dose with a toxicity risk that does not exceed q*, a criterion that is often desired in designing phase I trials. This criterion differs from that of finding the dose with toxicity risk closest to q*, which is used in methods such as the continual reassessment method. We use the theory of decision processes to find optimal sequential designs that maximize the expected number of patients within the trial allocated to the highest dose with toxicity not exceeding q*, among the doses under consideration. The proposed method is very general, in the sense that criteria other than the one considered here can be optimized and that optimal dose assignment can be defined in terms of patients within or outside the trial. It includes the continual reassessment method as an important special case. A numerical study indicates that the strategy compares favourably with other phase I designs.
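The targeting criterion described above can be written compactly: among candidate doses \(d_1 < \dots < d_k\) with toxicity probabilities \(p(d_j)\), the design aims for

\[
\mathrm{MTD} = \max\{\, d_j : p(d_j) \le q^* \,\},
\]

rather than the dose with \(p(d_j)\) closest to \(q^*\) that is targeted by the continual reassessment method.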
Abstract:
This study aims to help broaden the use of electronic portal imaging devices (EPIDs) for pre-treatment patient positioning verification, from photon-beam radiotherapy to photon- and electron-beam radiotherapy, by proposing and testing a method for acquiring clinically useful EPID images of patient anatomy using electron beams, with a view to enabling and encouraging further research in this area. EPID images used in this study were acquired using all available beams from a linac configured to deliver electron beams with nominal energies of 6, 9, 12, 16 and 20 MeV, as well as photon beams with nominal energies of 6 and 10 MV. A widely available, heterogeneous, approximately humanoid thorax phantom was used to provide an indication of the contrast and noise produced when imaging different types of tissue with comparatively realistic thicknesses. The acquired images were automatically calibrated and corrected for variations in the sensitivity of individual photodiodes using a flood field image. For electron beam imaging, flood field EPID calibration images were acquired with and without blocks of water-equivalent plastic (with thicknesses approximately equal to the practical range of the electrons in the plastic) placed upstream of the EPID to filter out the primary electron beam, leaving only the bremsstrahlung photon signal. While the electron beam images acquired using a standard (unfiltered) flood field calibration were noisy and difficult to interpret, the electron beam images acquired using the filtered flood field calibration showed tissues and bony anatomy with levels of contrast and noise similar to those seen in the clinically acceptable photon beam EPID images. The best electron beam imaging results (highest contrast, signal-to-noise and contrast-to-noise ratios) were achieved when the images were acquired using the higher-energy electron beams (16 and 20 MeV) with the EPID calibrated using an intermediate (12 MeV) electron beam energy. These results demonstrate the feasibility of acquiring clinically useful EPID images of patient anatomy using electron beams and suggest important avenues for future investigation. There is clear potential for the EPID imaging method proposed in this work to lead to the clinical use of electron beam imaging for geometric verification of electron treatments in the future.
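The flood-field correction step mentioned above can be sketched generically as a per-pixel flat-field normalisation. This is an illustrative outline only; the function name, optional dark-field handling and rescaling convention are assumptions, not the clinical EPID calibration chain:

```python
import numpy as np

def flood_field_correct(raw, flood, dark=None):
    """Illustrative flat-field (flood-field) correction of an EPID image.

    Each pixel is divided by the corresponding flood-field pixel to remove
    photodiode sensitivity variations, then rescaled so that the overall
    signal level of the flood field is preserved.
    """
    raw = raw.astype(float)
    flood = flood.astype(float)
    if dark is not None:                        # optional dark-field subtraction
        raw = raw - dark
        flood = flood - dark
    flood = np.where(flood > 0, flood, np.nan)  # avoid division by dead pixels
    return raw / flood * np.nanmean(flood)
```

For electron-beam imaging, the same correction would be applied, but with the flood field acquired through the water-equivalent filtration blocks described above.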
Abstract:
This dissertation addresses the modernization of Finnish hospital architecture between the First and Second World Wars, with a focus on facilities explicitly designed for women and children, which as special hospitals reflect specialization, a distinctive feature of the modern era. The facilities considered in the study are the Salus hospital, Dr. Länsimäki's women's hospital, the Folkhälsan in Svenska Finland association's child-care institute, the Helsinki Women's Clinic, the Viipuri Women's Hospital, the Helsinki Children's Clinic and the Children's Castle (Lastenlinna) in Helsinki. The study considers hospital architecture as an architectural, medical and social object of design. The theoretical starting point and perspective are the views of the French philosopher and historian Michel Foucault (1926–1984) concerning the relationship of bio-power and architecture. Underlying the construction of health-care facilities for women and children were not only the desire to help but also issues of population policy, social policy, training and professionalization. In this study, hospital architecture is interpreted as reflecting developments in medicine, while also producing and reinforcing discourses associated with the ideologies of the time of design and construction. The results of the research provide new information on the field of hospital design. The design of hospitals was no longer the sole prerogative of architects; instead, modern hospital design involved the collaboration and networking of experts in various fields. During the period studied, the pavilion system was incorporated into hospital architecture within the block system, which was regarded as rational. Rationalization was implemented on the terms of medical work. This led to spatial design in accordance with medical practices, through which norms were reinforced and created. An important aspect of the material is that the requirements of light, air, openness and hygiene created a glass architecture of an x-ray character, strongly associated with the element of discipline. The alliance of hygiene and architecture became a strategy for controlling the behaviour and encounters of people, for producing pedagogical and moral hygiene, and for reinforcing class hygiene. The modern hospital building also had to meet the requirements of aesthetic hygiene. Health-care facilities designed for women and children became production-oriented machinery: instruments for producing a healthy population and for reinforcing medical discourses.