984 results for Trans-dimensional simulated annealing.
Abstract:
Trans-apical aortic valve implantation (TA-AVI) is an established technique for high-risk patients requiring aortic valve replacement. Traditionally, both preoperative imaging (computed tomography (CT) scan, coronary angiogram) and intra-operative imaging (fluoroscopy) for stent-valve positioning and implantation require contrast medium injections. To preserve renal function in elderly patients suffering from chronic renal insufficiency, a fully echo-guided trans-catheter valve implantation seems a reasonable alternative. We report the first successful TA-AVI procedure performed solely under trans-oesophageal echocardiographic control, without contrast medium injections.
Abstract:
Introduction: A standardized three-dimensional ultrasonographic (3DUS) protocol that allows fetal face reconstruction is described. The ability to identify cleft lip with 3DUS using this protocol was assessed by operators with minimal 3DUS experience. Material and Methods: 260 stored volumes of the fetal face were analyzed using a standardized protocol by operators with different levels of competence in 3DUS. The outcomes studied were: (1) the performance of post-processing 3D face volumes for the detection of facial clefts; (2) the ability of a resident with minimal 3DUS experience to reconstruct the acquired facial volumes; and (3) the time needed to reconstruct each plane to allow proper diagnosis of a cleft. Results: The three orthogonal planes of the fetal face (axial, sagittal and coronal) were adequately reconstructed with similar performance whether acquired by a maternal-fetal medicine specialist or by residents with minimal experience (72 vs. 76%, p = 0.629). The learning curve for manipulation of 3DUS volumes of the fetal face corresponds to 30 cases and is independent of the operator's level of experience. Discussion: The learning curve for the standardized protocol we describe is short, even for inexperienced sonographers. This technique might shorten anatomy ultrasound examinations and improve the ability to visualize fetal face anomalies.
Abstract:
In this paper a model is developed to describe the three-dimensional contact melting process of a cuboid on a heated surface. The mathematical description involves two heat equations (one in the solid and one in the melt), the Navier-Stokes equations for the flow in the melt, a Stefan condition at the phase-change interface and a force balance between the weight of the solid and the countering pressure in the melt. In the solid, an optimised heat balance integral method is used to approximate the temperature. In the liquid, the small aspect ratio allows the Navier-Stokes and heat equations to be simplified considerably, so that the liquid pressure may be determined using an eigenfunction expansion; finally, the problem is reduced to solving three first-order ordinary differential equations. Results are presented showing the evolution of the melting process. Further reductions to the system are made to provide simple guidelines concerning the process. Comparison of the solutions with experimental data on the melting of n-octadecane shows excellent agreement.
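The reduction to a small set of ODEs can be illustrated with a much simpler relative of the problem above: the classical quasi-steady one-phase Stefan problem, whose melt-front ODE has a known closed-form solution. This is a hedged sketch for intuition only; the paper's actual three-equation system is not reproduced in the abstract, and the n-octadecane property values below are rough illustrative figures, not taken from the paper.

```python
import math

# Stand-in for the paper's reduced ODE system: the quasi-steady
# one-phase Stefan problem, where the melt front s(t) obeys
#     ds/dt = k*dT / (rho*L*s),  with  s(t) = sqrt(2*k*dT*t/(rho*L)).

def melt_front(t_end, k=0.36, dT=10.0, rho=776.0, L=2.44e5,
               s0=1e-6, n_steps=100_000):
    """Integrate ds/dt = k*dT/(rho*L*s) with a fixed-step RK4 scheme.

    Default property values are rough figures for n-octadecane
    (conductivity k [W/m/K], density rho [kg/m^3], latent heat L [J/kg])
    and are illustrative only.
    """
    a = k * dT / (rho * L)              # ds/dt = a / s
    f = lambda s: a / s
    h = t_end / n_steps
    s = s0
    for _ in range(n_steps):
        k1 = f(s)
        k2 = f(s + 0.5 * h * k1)
        k3 = f(s + 0.5 * h * k2)
        k4 = f(s + h * k3)
        s += (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    return s

t_end = 600.0                            # ten minutes of melting
s_num = melt_front(t_end)
s_exact = math.sqrt(2.0 * 0.36 * 10.0 * t_end / (776.0 * 2.44e5))
print(s_num, s_exact)                    # the two agree closely
```

The numerical front position matches the analytic square-root law to several digits, which is the kind of consistency check the paper's comparison with n-octadecane experiments relies on.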
Abstract:
Aspergillus lentulus, an Aspergillus fumigatus sibling species, is increasingly reported in corticosteroid-treated patients. Its clinical significance is unknown, but the fact that A. lentulus shows reduced antifungal susceptibility, mainly to voriconazole, is of serious concern. Heterologous expression of cyp51A from A. fumigatus and A. lentulus was performed in Saccharomyces cerevisiae to assess differences in the interaction of Cyp51A with the azole drugs. The absence of endogenous ERG11 was efficiently complemented in S. cerevisiae by the expression of either Aspergillus cyp51A allele. There was a marked difference between the azole minimum inhibitory concentration (MIC) values of the clones expressing each Aspergillus spp. cyp51A. Saccharomyces cerevisiae clones expressing A. lentulus alleles showed higher MICs for all of the azoles tested, supporting the hypothesis that the intrinsic azole resistance of A. lentulus could be associated with Cyp51A. Homology models of the A. fumigatus and A. lentulus Cyp51A proteins, based on the crystal structure of Cyp51p from Mycobacterium tuberculosis in complex with fluconazole, were almost identical owing to their high mutual sequence identity. Molecular dynamics (MD) was applied to both three-dimensional protein models to refine the homology modelling and to explore possible differences in the Cyp51A-voriconazole interaction. After 20 ns of MD simulation, some critical differences were observed in the putative closed form adopted by the protein upon voriconazole binding. A closer study of the putative voriconazole binding site in A. fumigatus and A. lentulus Cyp51A suggested that some major differences in the protein's BC loop could differentially affect the lock-up of voriconazole, which in turn could correlate with their different azole susceptibility profiles.
Abstract:
PURPOSE: To determine the diagnostic value of the intravascular contrast agent gadocoletic acid (B-22956) in three-dimensional, free-breathing coronary magnetic resonance angiography (MRA) for stenosis detection in patients with suspected or known coronary artery disease. METHODS: Eighteen patients underwent three-dimensional, free-breathing coronary MRA of the left and right coronary system before and after intravenous application of a single dose of gadocoletic acid (B-22956) using three different dose regimens (group A, 0.050 mmol/kg; group B, 0.075 mmol/kg; group C, 0.100 mmol/kg). Precontrast scanning followed a standard non-contrast T2 preparation/turbo-gradient echo coronary MRA sequence (T2Prep); for postcontrast scanning an inversion-recovery gradient echo sequence was used (real-time navigator correction for both scans). In pre- and postcontrast scans, quantitative analysis of coronary MRA data was performed to determine the number of visible side branches, vessel length and vessel sharpness for each of the three coronary arteries (LAD, LCX, RCA). The number of assessable coronary artery segments was determined to calculate sensitivity and specificity for the detection of stenoses ≥50% on a segment-to-segment basis (16-segment model) in pre- and postcontrast scans, with x-ray coronary angiography as the standard of reference. RESULTS: Dose group B (0.075 mmol/kg) was preferable with regard to improvement of MR angiographic parameters: in postcontrast scans all MR angiographic parameters increased significantly except for the number of visible side branches of the left circumflex artery. In addition, assessability of coronary artery segments improved significantly postcontrast in this dose group (67 versus 88%, p < 0.01). Diagnostic performance (sensitivity, specificity, accuracy) was 83, 77 and 78% for precontrast and 86, 95 and 94% for postcontrast scans.
CONCLUSIONS: The use of gadocoletic acid (B-22956) improves MR angiographic parameters, assessability of coronary segments and detection of coronary stenoses ≥50%.
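The sensitivity, specificity and accuracy figures quoted above are standard confusion-matrix quantities computed per coronary segment. As a minimal sketch (the per-segment counts are not given in the abstract, so the example counts below are hypothetical):

```python
def diagnostic_performance(tp, fp, tn, fn):
    """Return (sensitivity, specificity, accuracy) in percent from
    true/false positive and negative segment counts."""
    sensitivity = 100.0 * tp / (tp + fn)   # stenosed segments correctly detected
    specificity = 100.0 * tn / (tn + fp)   # normal segments correctly called
    accuracy = 100.0 * (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# Hypothetical segment counts, not from the study:
print(diagnostic_performance(45, 5, 45, 5))   # → (90.0, 90.0, 90.0)
```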
Abstract:
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending corresponding approaches beyond the local scale still represents a major challenge, yet is critically important for the development of reliable groundwater flow and contaminant transport models. To address this issue, I have developed a hydrogeophysical data integration technique based on a two-step Bayesian sequential simulation procedure that is specifically targeted towards larger-scale problems. The objective is to simulate the distribution of a target hydraulic parameter based on spatially exhaustive, but poorly resolved, measurements of a pertinent geophysical parameter and locally highly resolved, but spatially sparse, measurements of the considered geophysical and hydraulic parameters. To this end, my algorithm links the low- and high-resolution geophysical data via a downscaling procedure before relating the downscaled regional-scale geophysical data to the high-resolution hydraulic parameter field. I first illustrate the application of this novel data integration approach to a realistic synthetic database consisting of collocated high-resolution borehole measurements of the hydraulic and electrical conductivities and spatially exhaustive, low-resolution electrical conductivity estimates obtained from electrical resistivity tomography (ERT). The overall viability of this method is tested and verified by performing and comparing flow and transport simulations through the original and simulated hydraulic conductivity fields.
The corresponding results indicate that the proposed data integration procedure does indeed allow for obtaining faithful estimates of the larger-scale hydraulic conductivity structure and reliable predictions of the transport characteristics over medium- to regional-scale distances. The approach is then applied to a corresponding field scenario consisting of collocated high-resolution measurements of the electrical conductivity, as measured using a cone penetrometer testing (CPT) system, and the hydraulic conductivity, as estimated from electromagnetic flowmeter and slug test measurements, in combination with spatially exhaustive low-resolution electrical conductivity estimates obtained from surface-based electrical resistivity tomography (ERT). The corresponding results indicate that the newly developed data integration approach is indeed capable of adequately capturing both the small-scale heterogeneity as well as the larger-scale trend of the prevailing hydraulic conductivity field. The results also indicate that this novel data integration approach is remarkably flexible and robust and hence can be expected to be applicable to a wide range of geophysical and hydrological data at all scale ranges. In the second part of my thesis, I evaluate in detail the viability of sequential geostatistical resampling as a proposal mechanism for Markov Chain Monte Carlo (MCMC) methods applied to high-dimensional geophysical and hydrological inverse problems in order to allow for a more accurate and realistic quantification of the uncertainty associated with the thus inferred models. Focusing on a series of pertinent crosshole georadar tomographic examples, I investigate two classes of geostatistical resampling strategies with regard to their ability to efficiently and accurately generate independent realizations from the Bayesian posterior distribution.
The corresponding results indicate that, despite its popularity, sequential resampling is rather inefficient at drawing independent posterior samples for realistic synthetic case studies, notably for the practically common and important scenario of pronounced spatial correlation between model parameters. To address this issue, I have developed a new gradual-deformation-based perturbation approach, which is flexible with regard to the number of model parameters as well as the perturbation strength. Compared to sequential resampling, this newly proposed approach was proven to be highly effective in decreasing the number of iterations required for drawing independent samples from the Bayesian posterior distribution.
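The core idea behind the gradual-deformation perturbation mentioned above is compact: a weighted combination m(theta) = m1*cos(theta) + m2*sin(theta) of two independent standard-Gaussian realizations is itself standard Gaussian for every theta (since cos² + sin² = 1), so theta tunes the perturbation strength without leaving the prior. The sketch below is illustrative only, not the thesis implementation, and ignores spatial correlation structure (which the combination would also preserve if both fields shared the same covariance):

```python
import math
import random

def gradual_deformation(m_current, m_new, theta):
    """Combine two independent N(0,1) fields into a perturbed field.

    m(theta) = m_current*cos(theta) + m_new*sin(theta) is again N(0,1)
    because cos^2 + sin^2 = 1; a small theta gives a small, correlated
    perturbation, usable as an MCMC proposal of tunable strength.
    """
    c, s = math.cos(theta), math.sin(theta)
    return [c * a + s * b for a, b in zip(m_current, m_new)]

random.seed(0)
n = 100_000
m1 = [random.gauss(0.0, 1.0) for _ in range(n)]   # current model
m2 = [random.gauss(0.0, 1.0) for _ in range(n)]   # independent draw
m = gradual_deformation(m1, m2, theta=0.2)        # mild perturbation

mean = sum(m) / n
var = sum(x * x for x in m) / n - mean * mean
print(round(mean, 2), round(var, 2))              # close to 0 and 1
```

Within an MCMC chain, theta plays the role that step size plays in random-walk proposals: small values give high acceptance but slow exploration, large values the reverse.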
Abstract:
The choice of sample preparation protocol is a critical factor for isoelectric focusing, which in turn affects the two-dimensional gel result in terms of quality and protein species distribution. The optimal protocol varies depending on the nature of the sample for analysis and on the properties of the constituent protein species (hydrophobicity, tendency to form aggregates, copy number) intended for resolution. This review explains the standard sample buffer constituents and illustrates a series of protocols for processing diverse samples for two-dimensional gel electrophoresis, including hydrophobic membrane proteins. Current methods for concentrating lower-abundance proteins, by removal of high-abundance proteins, are also outlined. Finally, since protein staining is becoming increasingly incorporated into the sample preparation procedure, we describe the principles and applications of current (and future) pre-electrophoretic labelling methods.
Abstract:
RATIONALE AND OBJECTIVES: Recent developments in MR imaging equipment have enabled high-quality steady-state free-precession (Balanced FFE, True-FISP) MR imaging with a substantial 'T2-like' contrast, resulting in a high signal intensity of the blood pool without the application of exogenous contrast agents. It is hypothesized that Balanced FFE may be valuable for contrast enhancement in 3D free-breathing coronary MRA. MATERIALS AND METHODS: Navigator-gated, free-breathing, cardiac-triggered coronary MRA was performed in 10 healthy adult subjects and three patients with x-ray-defined coronary artery disease using a segmented k-space 3D Balanced FFE imaging sequence. RESULTS: A high contrast-to-noise ratio between the blood pool and the myocardium (29 +/- 8) and long-segment visualization of both coronary arteries could be obtained in about 5 minutes during free breathing using the present navigator-gated Balanced FFE coronary MRA approach. First patient results demonstrated successful display of coronary artery stenoses. CONCLUSION: Balanced FFE offers a potential alternative for endogenous contrast enhancement in navigator-gated free-breathing 3D coronary MRA. The obtained results, together with the relatively short scanning time, warrant further studies in larger patient collectives.
Abstract:
PURPOSE: To investigate the feasibility of high-resolution selective three-dimensional (3D) magnetic resonance coronary angiography (MRCA) in the evaluation of coronary artery stenoses. MATERIALS AND METHODS: In 12 patients with coronary artery stenoses, MRCA of the coronary artery groups, including the coronary segments with stenoses of 50% or greater based on conventional x-ray coronary angiography (CAG), was performed with double-oblique imaging planes by orienting the 3D slab along the major axis of each right coronary artery-left circumflex artery (RCA-LCX) group and each left main trunk-left anterior descending artery (LMT-LAD) group. Ten RCA-LCX and five LMT-LAD MR angiograms were obtained, and the results were compared with those of conventional x-ray angiography. RESULTS: Of 70 coronary artery segments expected to be covered, a total of 49 (70%) segments were fully demonstrated in diagnostic quality. The accuracy of identifying the segmental location of stenoses was as high as 96%. The retrospective analysis for stenoses of 50% or greater yielded a sensitivity, specificity, and accuracy of 80%, 85%, and 84%, respectively. CONCLUSION: Selective 3D MRCA has the potential for segment-by-segment evaluation of major portions of the right and left coronary arteries with high accuracy.
Abstract:
Altitudinal tree lines are mainly constrained by temperature, but can also be influenced by factors such as human activity, particularly in the European Alps, where centuries of agricultural use have affected the tree line. Over the last decades this trend has been reversed owing to changing agricultural practices and land abandonment. We aimed to combine a statistical land-abandonment model with a forest dynamics model, to take into account the combined effects of climate and human land use on the Alpine tree line in Switzerland. Land-abandonment probability was expressed by a logistic regression function of degree-day sum, distance from the forest edge, soil stoniness, slope, proportion of employees in the secondary and tertiary sectors, proportion of commuters and proportion of full-time farms. This was implemented in the TreeMig spatio-temporal forest model. Distance from the forest edge and degree-day sum vary through feedback from the dynamics part of TreeMig and from climate change scenarios, while the other variables remain constant for each grid cell over time. The new model, TreeMig-LAb, was tested on theoretical landscapes in which the variables in the land-abandonment model were varied one by one. This confirmed the strong influence of distance from forest and slope on the abandonment probability. Degree-day sum has a more complex role, with opposite influences on land abandonment and forest growth. TreeMig-LAb was also applied to a case-study area in the Upper Engadine (Swiss Alps), along with a model in which abandonment probability was a constant. Two scenarios were used: natural succession only (100% probability) and a probability of abandonment based on past transition proportions in that area (2.1% per decade). The former showed new forest growing in all but the highest-altitude locations. The latter was more realistic as to the number of newly forested cells, but their location was random and the resulting landscape heterogeneous.
Using the logistic regression model gave results consistent with observed patterns of land-abandonment: existing forests expanded and gaps closed, leading to an increasingly homogeneous landscape.
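The land-abandonment submodel described above is a plain per-cell logistic regression, which can be sketched as follows. The coefficients and the example grid-cell values below are hypothetical placeholders, not the fitted TreeMig-LAb values; only the list of predictors comes from the abstract.

```python
import math

# Hypothetical coefficients for the logistic land-abandonment model;
# signs are chosen for illustration (e.g. abandonment less likely far
# from the forest edge and on full-time farms), not fitted values.
COEFFS = {
    "intercept": -2.0,
    "degree_day_sum": -0.001,          # per degree-day
    "dist_forest_edge": -0.01,         # per metre
    "stoniness": 0.5,
    "slope": 0.03,                     # per degree
    "secondary_tertiary_share": 1.0,
    "commuter_share": 0.8,
    "fulltime_farm_share": -1.5,
}

def abandonment_probability(x):
    """Logistic regression: p = 1 / (1 + exp(-(b0 + sum_i b_i * x_i)))."""
    z = COEFFS["intercept"] + sum(COEFFS[k] * v for k, v in x.items())
    return 1.0 / (1.0 + math.exp(-z))

cell = {                                # one hypothetical grid cell
    "degree_day_sum": 900.0,
    "dist_forest_edge": 50.0,
    "stoniness": 0.2,
    "slope": 25.0,
    "secondary_tertiary_share": 0.6,
    "commuter_share": 0.4,
    "fulltime_farm_share": 0.1,
}
p = abandonment_probability(cell)
print(p)   # a probability strictly between 0 and 1
```

In TreeMig-LAb this probability is evaluated per grid cell and per time step, with distance from the forest edge and degree-day sum fed back from the forest-dynamics part of the model.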
Abstract:
Although NK cells use invariant receptors to identify diseased cells, they nevertheless adapt to their environment, including the presence of certain MHC class I (MHC-I) molecules. This NK cell education, which is mediated by inhibitory receptors specific for MHC-I molecules, changes the responsiveness of activating NK cell receptors (licensing) and modifies the repertoire of MHC-I receptors used by NK cells. The fact that certain MHC-I receptors have the unusual capacity to recognize MHC-I molecules expressed by other cells (trans) and by the NK cell itself (cis) has raised the question regarding possible contributions of the two types of interactions to NK cell education. Although the analysis of an MHC-I receptor variant suggested a role for cis interaction for NK cell licensing, adoptive NK cell transfer experiments supported a key role for trans recognition. To reconcile some of these findings, we have analyzed the impact of cell type-specific deletion of an MHC-I molecule and of a novel MHC-I receptor variant on the education of murine NK cells when these mature under steady-state conditions in vivo. We find that MHC-I expression by NK cells (cis) and by T cells (trans), and MHC-I recognition in cis and in trans, are both needed for NK cell licensing. Unexpectedly, modifications of the MHC-I receptor repertoire are chiefly dependent on cis binding, which provides additional support for an essential role for this unconventional type of interaction for NK cell education. These data suggest that two separate functions of MHC-I receptors are needed to adapt NK cells to self-MHC-I.
Abstract:
Members of the Ly-49 gene family code for class I MHC-specific receptors that regulate NK cell function. Owing to a combinatorial distribution of Ly-49 receptors, NK cells display considerable clonal heterogeneity. The acquisition of one Ly-49 receptor, Ly-49A, is strictly dependent on the transcriptional trans-acting factor T cell-specific factor-1 (TCF-1). Indeed, TCF-1 binds to two sites in the Ly-49a promoter and regulates its activity, suggesting that the Ly-49a gene is a direct TCF-1 target. TCF-1 deficiency resulted in the altered usage of additional Ly-49 receptors. We show in this study, using TCF-1 beta(2)-microglobulin double-deficient mice, that these repertoire alterations are not due to Ly-49/MHC class I interactions. Our findings rather suggest a TCF-1-dependent, cell-autonomous effect on the acquisition of multiple Ly-49 receptors. Besides reduced receptor usage (Ly-49A and Ly-49D), we also observed unchanged (Ly-49C) and significantly expanded (Ly-49G and Ly-49I) receptor usage in the absence of TCF-1. These effects did not in all cases correlate with the presence of TCF binding sites in the respective proximal promoter. Therefore, besides TCF-1 binding to the proximal promoter, Ly-49 acquisition may also be regulated by TCF-1 binding to more distant cis-acting elements and/or by regulating the expression of additional trans-acting factors. Consistent with the observed differential, positive or negative role of TCF-1 in Ly-49 receptor acquisition, reporter gene assays revealed the presence of an inducing as well as a repressing TCF site in certain proximal Ly-49 promoters. These findings reveal an important role of TCF-1 in the formation of the NK cell receptor repertoire.
Abstract:
The aim of this study was to make a comprehensive evaluation of organ-specific out-of-field doses using Monte Carlo (MC) simulations for different breast cancer irradiation techniques, and to compare the results with a commercial treatment planning system (TPS). Three breast radiotherapy techniques using 6 MV tangential photon beams were compared: (a) 2DRT (open rectangular fields), (b) 3DCRT (conformal wedged fields), and (c) hybrid IMRT (open conformal + modulated fields). Over 35 organs were contoured in a whole-body CT scan, and organ-specific dose distributions were determined with MC and the TPS. Large differences in out-of-field doses were observed between MC and TPS calculations, even for organs close to the target volume such as the heart, the lungs and the contralateral breast (up to 70% difference). MC simulations showed that a large fraction of the out-of-field dose comes from the out-of-field head-scatter fluence (>40%), which is not adequately modeled by the TPS. Based on MC simulations, the 3DCRT technique using external wedges yielded significantly higher doses (up to a factor of 4-5 in the pelvis) than the 2DRT and hybrid IMRT techniques, which yielded similar out-of-field doses. In sharp contrast to popular belief, the IMRT technique investigated here does not increase the out-of-field dose compared with conventional techniques and may offer the best overall plan. The 3DCRT technique with external wedges yields the largest out-of-field doses. For accurate out-of-field dose assessment, a commercial TPS should not be used, even for organs near the target volume (contralateral breast, lungs, heart).