19 results for "minimization"
at Université de Lausanne, Switzerland
Abstract:
Long-term outcomes after kidney transplantation remain suboptimal, despite the great achievements observed in recent years with the use of modern immunosuppressive drugs. Currently, the calcineurin inhibitors (CNI) cyclosporine and tacrolimus remain the cornerstones of immunosuppressive regimens in many centers worldwide, despite their well-described side effects, including nephrotoxicity. In this article, we review recent CNI-minimization strategies in kidney transplantation, emphasizing the importance of long-term follow-up and patient monitoring. Finally, accumulating data indicate that low-dose CNI-based regimens may offer an attractive balance between efficacy and toxicity.
Abstract:
This paper presents 3-D brain tissue classification schemes using three recent promising energy minimization methods for Markov random fields: graph cuts, loopy belief propagation and tree-reweighted message passing. The classification is performed using the well known finite Gaussian mixture Markov random field model. Results from the above methods are compared with the widely used iterated conditional modes algorithm. The evaluation is performed on a dataset containing simulated T1-weighted MR brain volumes with varying noise and intensity non-uniformities. The comparisons are performed in terms of energies as well as based on ground truth segmentations, using various quantitative metrics.
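The baseline in this comparison, iterated conditional modes, is simple enough to sketch. Below is a minimal, illustrative Python version for a Potts-smoothed Gaussian-mixture MRF; it assumes the per-class means and standard deviations are already known (the paper estimates them from the mixture model), and it is not the authors' implementation.

```python
import numpy as np

def icm_segment(img, means, sigmas, beta=1.0, n_iter=5):
    """Iterated conditional modes (ICM) for a Potts-MRF /
    Gaussian-mixture segmentation of a 2-D image.

    img    : 2-D float array of intensities
    means  : per-class intensity means (assumed known here)
    sigmas : per-class standard deviations (assumed known here)
    beta   : smoothness weight of the Potts pairwise prior
    """
    means = np.asarray(means, float)
    sigmas = np.asarray(sigmas, float)
    K = len(means)
    # Unary term: negative log Gaussian likelihood (up to a constant)
    unary = ((img[..., None] - means) ** 2) / (2 * sigmas ** 2) + np.log(sigmas)
    # Initialize labels by maximum likelihood (prior ignored)
    labels = unary.argmin(axis=-1)
    H, W = img.shape
    for _ in range(n_iter):
        for i in range(H):
            for j in range(W):
                # Potts penalty: count disagreeing 4-neighbours per class
                nbrs = []
                if i > 0:     nbrs.append(labels[i - 1, j])
                if i < H - 1: nbrs.append(labels[i + 1, j])
                if j > 0:     nbrs.append(labels[i, j - 1])
                if j < W - 1: nbrs.append(labels[i, j + 1])
                pair = beta * np.array([sum(n != k for n in nbrs)
                                        for k in range(K)])
                # Greedy per-pixel update: pick the locally optimal label
                labels[i, j] = (unary[i, j] + pair).argmin()
    return labels
```

ICM converges only to a local minimum of the MRF energy, which is exactly the weakness that graph cuts, loopy belief propagation, and tree-reweighted message passing address.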
Abstract:
PURPOSE: To improve the traditional Nyquist ghost correction approach in echo planar imaging (EPI) at high fields, via schemes based on the reversal of the EPI readout gradient polarity for every other volume throughout a functional magnetic resonance imaging (fMRI) acquisition train. MATERIALS AND METHODS: An EPI sequence in which the readout gradient was inverted every other volume was implemented on two ultrahigh-field systems. Phantom images and fMRI data were acquired to evaluate ghost intensities and the presence of false-positive blood oxygenation level-dependent (BOLD) signal with and without ghost correction. Three different algorithms for ghost correction of alternating readout EPI were compared. RESULTS: Irrespective of the chosen processing approach, ghosting was significantly reduced (up to 70% lower intensity) in both rat brain images acquired on a 9.4T animal scanner and human brain images acquired at 7T, resulting in a reduction of sources of false-positive activation in fMRI data. CONCLUSION: It is concluded that at high B0 fields, substantial gains in Nyquist ghost correction of echo planar time series are possible by alternating the readout gradient every other volume.
Abstract:
BACKGROUND: Coronary artery disease (CAD) continues to be one of the top public health burdens. Perfusion cardiovascular magnetic resonance (CMR) is generally accepted to detect CAD, while data on its cost effectiveness are scarce. Therefore, the goal of the study was to compare the costs of a CMR-guided strategy vs two invasive strategies in a large CMR registry. METHODS: In 3'647 patients with suspected CAD of the EuroCMR-registry (59 centers/18 countries) costs were calculated for diagnostic examinations (CMR, X-ray coronary angiography (CXA) with/without FFR), revascularizations, and complications during a 1-year follow-up. Patients with ischemia-positive CMR underwent an invasive CXA and revascularization at the discretion of the treating physician (=CMR + CXA-strategy). In the hypothetical invasive arm, costs were calculated for an initial CXA and a FFR in vessels with ≥50 % stenoses (=CXA + FFR-strategy) and the same proportion of revascularizations and complications were applied as in the CMR + CXA-strategy. In the CXA-only strategy, costs included those for CXA and for revascularizations of all ≥50 % stenoses. To calculate the proportion of patients with ≥50 % stenoses, the stenosis-FFR relationship from the literature was used. Costs of the three strategies were determined from a third-party payer perspective in 4 healthcare systems. RESULTS: Revascularizations were performed in 6.2 %, 4.5 %, and 12.9 % of all patients, patients with atypical chest pain (n = 1'786), and typical angina (n = 582), respectively; whereas complications (=all-cause death and non-fatal infarction) occurred in 1.3 %, 1.1 %, and 1.5 %, respectively. The CMR + CXA-strategy reduced costs by 14 %, 34 %, 27 %, and 24 % in the German, UK, Swiss, and US context, respectively, when compared to the CXA + FFR-strategy; and by 59 %, 52 %, 61 % and 71 %, respectively, versus the CXA-only strategy.
In patients with typical angina, cost savings by CMR + CXA vs CXA + FFR were minimal in the German (2.3 %), intermediate in the US and Swiss (11.6 % and 12.8 %, respectively), and remained substantial in the UK (18.9 %) systems. Sensitivity analyses proved the robustness of results. CONCLUSIONS: A CMR + CXA-strategy for patients with suspected CAD provides substantial cost reduction compared to a hypothetical CXA + FFR-strategy in patients with low to intermediate disease prevalence. However, in the subgroup of patients with typical angina, cost savings were only minimal to moderate.
Abstract:
Diffusion MRI is a well established imaging modality providing a powerful way to probe the structure of the white matter non-invasively. Despite its potential, the intrinsically long scan times of these sequences have hampered their use in clinical practice. For this reason, a large variety of methods have been proposed recently to shorten the acquisition times. Among them, spherical deconvolution approaches have gained a lot of interest for their ability to reliably recover the intra-voxel fiber configuration with a relatively small number of data samples. To overcome the intrinsic instabilities of deconvolution, these methods use regularization schemes generally based on the assumption that the fiber orientation distribution (FOD) to be recovered in each voxel is sparse. The well known Constrained Spherical Deconvolution (CSD) approach resorts to Tikhonov regularization, based on an ℓ2-norm prior, which promotes a weak version of sparsity. Also, in the last few years compressed sensing has been advocated to further accelerate the acquisitions, and ℓ1-norm minimization is generally employed as a means to promote sparsity in the recovered FODs. In this paper, we provide evidence that the use of an ℓ1-norm prior to regularize this class of problems is somewhat inconsistent with the fact that the fiber compartments all sum up to unity. To overcome this ℓ1 inconsistency while simultaneously exploiting sparsity more optimally than through an ℓ2 prior, we reformulate the reconstruction problem as a constrained formulation between a data term and a sparsity prior consisting of an explicit bound on the ℓ0-norm of the FOD, i.e. on the number of fibers. The method has been tested on both synthetic and real data. Experimental results show that the proposed ℓ0 formulation significantly reduces modeling errors compared to the state-of-the-art ℓ2 and ℓ1 regularization approaches.
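Solving an ℓ0-constrained problem like the one above is typically done greedily. The toy sketch below uses orthogonal matching pursuit, the simplest such solver; it is not the authors' algorithm, and the generic dictionary `A` merely stands in for the spherical-deconvolution operator.

```python
import numpy as np

def omp(A, y, k):
    """Greedy orthogonal matching pursuit: approximately solve
    min ||y - A x||_2  subject to  ||x||_0 <= k.
    A is an (n x m) dictionary; returns a coefficient vector x of length m."""
    n, m = A.shape
    residual = y.copy()
    support = []
    x = np.zeros(m)
    for _ in range(k):
        # Select the atom most correlated with the current residual
        idx = int(np.argmax(np.abs(A.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Least-squares refit of the coefficients on the current support
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x[:] = 0.0
        x[support] = coef
        residual = y - A @ x
    return x
```

Note the abstract's ℓ1 objection in miniature: if the entries of `x` are nonnegative fiber fractions summing to one, then `np.abs(x).sum()` equals 1 for every feasible solution, so an ℓ1 penalty cannot discriminate between them, whereas the ℓ0 bound `k` still does.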
Abstract:
The significant development of immunosuppressive drug therapies within the past 20 years has had a major impact on the outcome of clinical solid organ transplantation, mainly by decreasing the incidence of acute rejection episodes and improving short-term patient and graft survival. However, long-term results remain relatively disappointing because of chronic allograft dysfunction and patient morbidity or mortality, which is often related to the adverse effects of immunosuppressive treatment. Thus, the induction of specific immunological tolerance of the recipient towards the allograft remains an important objective in transplantation. In this article, we first briefly describe the mechanisms of allograft rejection and immune tolerance. We then review in detail current tolerogenic strategies that could promote central or peripheral tolerance, highlighting the promises as well as the remaining challenges in clinical transplantation. The induction of haematopoietic mixed chimerism could be an approach to induce robust central tolerance, and we describe recent encouraging reports of end-stage kidney disease patients, without concomitant malignancy, who have undergone combined bone marrow and kidney transplantation. We discuss current studies suggesting that, while promoting peripheral transplantation tolerance in preclinical models, induction protocols based on lymphocyte depletion (polyclonal antithymocyte globulins, alemtuzumab) or co-stimulatory blockade (belatacept) should, at the current stage, be considered more as drug-minimization rather than tolerance-inducing strategies. Thus, a better understanding of the mechanisms that promote peripheral tolerance has led to newer approaches and the investigation of individualized donor-specific cellular therapies based on manipulated recipient regulatory T cells.
Abstract:
Tobacco-smoking prevalence has been decreasing in many high-income countries, but not in prison. We provide a summary of recent data on smoking in prison (United States, Australia, and Europe), and discuss examples of implemented policies for responding to environmental tobacco smoke (ETS), their health, humanitarian, and ethical aspects. We gathered data through a systematic literature review, and added the authors' ongoing experience in the implementation of smoking policies outside and inside prisons in Australia and Europe. Detainees' smoking prevalence varies between 64 per cent and 91.8 per cent, and can be more than three times as high as in the general population. Few data are available on the prevalence of smoking in women detainees and staff. Policies vary greatly. Bans may either be 'total' or 'partial' (smoking allowed in cells or designated places). A comprehensive policy strategy to reduce ETS needs a harm minimization philosophy, and should include environmental restrictions, information, and support to detainees and staff for smoking cessation, and health staff training in smoking cessation.
Abstract:
BACKGROUND: Enhanced recovery protocols may reduce postoperative complications and length of hospital stay. However, the implementation of these protocols requires time and financial investment. This study evaluated the cost-effectiveness of enhanced recovery implementation. METHODS: The first 50 consecutive patients treated during implementation of an enhanced recovery programme were compared with 50 consecutive patients treated in the year before its introduction. The enhanced recovery protocol principally implemented preoperative counselling, reduced preoperative fasting, preoperative carbohydrate loading, avoidance of premedication, optimized fluid balance, standardized postoperative analgesia, use of a no-drain policy, as well as early nutrition and mobilization. Length of stay, readmissions and complications within 30 days were compared. A cost-minimization analysis was performed. RESULTS: Hospital stay was significantly shorter in the enhanced recovery group: median 7 (interquartile range 5-12) versus 10 (7-18) days (P = 0·003); two patients were readmitted in each group. The rate of severe complications was lower in the enhanced recovery group (12 versus 20 per cent), but there was no difference in overall morbidity. The mean saving per patient in the enhanced recovery group was €1651. CONCLUSION: Enhanced recovery is cost-effective, with savings evident even in the initial implementation period.
Abstract:
We propose a segmentation method based on the geometric representation of images as 2-D manifolds embedded in a higher dimensional space. The segmentation is formulated as a minimization problem, where the contours are described by a level set function and the objective functional corresponds to the surface of the image manifold. In this geometric framework, both data-fidelity and regularity terms of the segmentation are represented by a single functional that intrinsically aligns the gradients of the level set function with the gradients of the image and results in a segmentation criterion that exploits the directional information of image gradients to overcome image inhomogeneities and fragmented contours. The proposed formulation combines this robust alignment of gradients with attractive properties of previous methods developed in the same geometric framework: 1) the natural coupling of image channels proposed for anisotropic diffusion and 2) the ability of subjective surfaces to detect weak edges and close fragmented boundaries. The potential of such a geometric approach lies in the general definition of Riemannian manifolds, which naturally generalizes existing segmentation methods (the geodesic active contours, the active contours without edges, and the robust edge integrator) to higher dimensional spaces, non-flat images, and feature spaces. Our experiments show that the proposed technique improves the segmentation of multi-channel images, images subject to inhomogeneities, and images characterized by geometric structures like ridges or valleys.
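The geometric framework above generalizes the geodesic active contour model it cites. As a hedged sketch of the background (this is the standard textbook functional and its level-set evolution, not necessarily the paper's exact objective):

```latex
% Geodesic active contour energy over a curve C, with an edge
% indicator g that decreases with the image gradient magnitude:
E(C) = \oint g\bigl(\lvert \nabla I(C(s)) \rvert\bigr)\, ds,
\qquad g(r) = \frac{1}{1 + r^{2}}.
% Embedding C as the zero level set of \varphi, gradient descent
% on E yields the curve evolution
\frac{\partial \varphi}{\partial t}
 = g\,\kappa\,\lvert \nabla \varphi \rvert
 + \nabla g \cdot \nabla \varphi,
\qquad
\kappa = \operatorname{div}\!\left(
  \frac{\nabla \varphi}{\lvert \nabla \varphi \rvert}\right).
```

The paper's contribution replaces this edge-stopping functional with the surface area of the image manifold, so that the gradient-alignment term arises intrinsically rather than through the hand-chosen $g$.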
Abstract:
INTRODUCTION: Eddy currents induced by switching of magnetic field gradients can lead to distortions in short echo-time spectroscopy or diffusion weighted imaging. In small-bore magnets, such as human head-only systems, minimization of eddy current effects is more demanding because of the proximity of the gradient coil to conducting structures. METHODS: In the present study, the eddy current behavior achievable on a recently installed 7-Tesla, 68-cm-bore head-only magnet was characterized. RESULTS: Residual effects after compensation were shown to be on the same order of magnitude as those measured on two whole body systems (3 and 4.7 T), while using two- to threefold higher gradient slew rates.
Abstract:
The drug discovery process has been deeply transformed recently by the use of computational ligand-based or structure-based methods, helping with lead compound identification and optimization, and finally the delivery of new drug candidates more quickly and at lower cost. Structure-based computational methods for drug discovery mainly involve ligand-protein docking and rapid binding free energy estimation, both of which require force field parameterization for many drug candidates. Here, we present a fast force field generation tool, called SwissParam, able to generate topologies and parameters for an arbitrary small organic molecule, based on the Merck molecular force field, but in a functional form that is compatible with the CHARMM force field. Output files can be used with CHARMM or GROMACS. The topologies and parameters generated by SwissParam are used by the docking software EADock2 and EADock DSS to describe the small molecules to be docked, whereas the protein is described by the CHARMM force field, allowing them to reach success rates ranging from 56 to 78%. We have also developed a rapid binding free energy estimation approach, using SwissParam for ligands and CHARMM22/27 for proteins, which requires only a short minimization to reproduce the experimental binding free energy of 214 ligand-protein complexes involving 62 different proteins, with a standard error of 2.0 kcal/mol and a correlation coefficient of 0.74. Together, these results demonstrate the relevance of using SwissParam topologies and parameters to describe small organic molecules in computer-aided drug design applications, together with a CHARMM22/27 description of the target protein. SwissParam is available free of charge for academic users at www.swissparam.ch.
Abstract:
Tractography algorithms provide us with the ability to non-invasively reconstruct fiber pathways in the white matter (WM) by exploiting the directional information described with diffusion magnetic resonance. These methods can be divided into two major classes, local and global. Local methods reconstruct each fiber tract iteratively by considering only directional information at the voxel level and its neighborhood. Global methods, on the other hand, reconstruct all the fiber tracts of the whole brain simultaneously by solving a global energy minimization problem. The latter have shown improvements compared to previous techniques, but these algorithms still suffer from an important shortcoming that is crucial in the context of brain connectivity analyses. As no anatomical priors are usually considered during the reconstruction process, the recovered fiber tracts are not guaranteed to connect cortical regions and, as a matter of fact, most of them stop prematurely in the WM; this violates important properties of neural connections, which are known to originate in the gray matter (GM) and develop in the WM. Hence, this shortcoming poses serious limitations for the use of these techniques for the assessment of the structural connectivity between brain regions and, de facto, it can potentially bias any subsequent analysis. Moreover, the estimated tracts are not quantitative: every fiber contributes the same weight toward the predicted diffusion signal. In this work, we propose a novel approach for global tractography that is specifically designed for connectivity analysis applications which: (i) explicitly enforces anatomical priors of the tracts in the optimization and (ii) considers the effective contribution of each of them, i.e., volume, to the acquired diffusion magnetic resonance imaging (MRI) image. We evaluated our approach on both a realistic diffusion MRI phantom and in vivo data, and also compared its performance to existing tractography algorithms.
Abstract:
Diffusion MRI is a well established imaging modality providing a powerful way to non-invasively probe the structure of the white matter. Despite the potential of the technique, the intrinsically long scan times of these sequences have hampered their use in clinical practice. For this reason, a wide variety of methods have been proposed to shorten acquisition times. [...] Here we review recent work in which we propose to further exploit the versatility of compressed sensing and convex optimization with the aim of characterizing the fiber orientation distribution sparsity more optimally. We reformulate the spherical deconvolution problem as a constrained ℓ0 minimization.
Abstract:
BACKGROUND: Deep burn assessment made by clinical evaluation has an accuracy varying between 60% and 80% and will determine whether a burn injury will need tangential excision and skin grafting or whether it will be able to heal spontaneously. Laser Doppler Imaging (LDI) techniques allow an improved burn depth assessment, but their use is limited by the time-consuming image acquisition, which may take up to 6 min per image. METHODS: To evaluate the effectiveness and reliability of a newly developed full-field LDI technology, 15 consecutive patients presenting with intermediate-depth burns were assessed both clinically and by FluxExplorer LDI technology. Comparison between the two methods of assessment was carried out. RESULTS: Image acquisition was done within 6 s. FluxExplorer LDI technology achieved a significantly improved accuracy of burn depth assessment compared to the clinical judgement performed by board-certified plastic and reconstructive surgeons (P < 0.05, 93% of burn injuries correctly assessed vs. 80% for clinical assessment). CONCLUSION: Technological improvements of LDI technology, leading to a decreased image acquisition time and reliable burn depth assessment, allow the routine use of such devices in the acute setting of burn care without interfering with the patient's treatment. Rapid and reliable LDI technology may assist clinicians in burn depth assessment and may limit the morbidity of burn patients through a minimization of the area of surgical debridement. Future technological improvements allowing the miniaturization of the device will further ease its clinical application.