34 results for Local likelihood function
Abstract:
Sophisticated magnetic resonance tagging techniques provide powerful tools for the non-invasive assessment of local heart-wall motion, supporting a deeper understanding of local heart function. A new image analysis procedure has been developed for the extraction of motion data from time series of magnetic resonance tagged images and for the visualization of local heart-wall motion. New parameters have been derived that allow quantification of the motion patterns and are highly sensitive to any changes in these patterns. The new procedure has been applied to heart motion analysis in healthy volunteers and in patient collectives with different heart diseases. The results achieved are summarized and discussed.
Abstract:
Myocardial tagging has been shown to be a useful magnetic resonance modality for the assessment and quantification of local myocardial function. Many myocardial tagging techniques suffer from a rapid fading of the tags, restricting their application mainly to the systolic phases of the cardiac cycle. However, left ventricular diastolic dysfunction has been increasingly appreciated as a major cause of heart failure. Subtraction-based slice-following CSPAMM myocardial tagging has been shown to overcome limitations such as fading of the tags. Remaining impediments to this technique, however, are extensive scanning times (approximately 10 min), the requirement of repeated breath-holds using a coached breathing pattern, and the enhanced sensitivity to artifacts related to poor patient compliance or inconsistent depths of end-expiratory breath-holds. We therefore propose a combination of slice-following CSPAMM myocardial tagging with a segmented EPI imaging sequence. Together with an optimized RF excitation scheme, this makes it possible to acquire as many as 20 systolic and diastolic grid-tagged images per cardiac cycle with high tagging contrast during a short period of sustained respiration.
Abstract:
BACKGROUND: The estimation of demographic parameters from genetic data often requires the computation of likelihoods. However, the likelihood function is computationally intractable for many realistic evolutionary models, and the use of Bayesian inference has therefore been limited to very simple models. The situation changed recently with the advent of Approximate Bayesian Computation (ABC) algorithms, which allow one to obtain parameter posterior distributions based on simulations that do not require likelihood computations. RESULTS: Here we present ABCtoolbox, a series of open source programs to perform Approximate Bayesian Computation (ABC). It implements various ABC algorithms, including rejection sampling, MCMC without likelihood, a particle-based sampler, and ABC-GLM. ABCtoolbox is bundled with, but not limited to, a program that allows parameter inference in a population genetics context and the simultaneous use of different types of markers with different ploidy levels. In addition, ABCtoolbox can interact with most simulation and summary statistics computation programs. The usability of ABCtoolbox is demonstrated by inferring the evolutionary history of two evolutionary lineages of Microtus arvalis. Using nuclear microsatellites and mitochondrial sequence data in the same estimation procedure enabled us to infer sex-specific population sizes and migration rates and to find that males show smaller population sizes but much higher levels of migration than females. CONCLUSION: ABCtoolbox allows a user to perform all the necessary steps of a full ABC analysis, from parameter sampling from prior distributions, through data simulation, computation of summary statistics, estimation of posterior distributions, model choice, and validation of the estimation procedure, to visualization of the results.
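The simplest of the ABC algorithms named in this abstract, rejection sampling, is easy to sketch: draw parameters from the prior, simulate data, and keep draws whose summary statistic lands close to the observed one. The following is a minimal, self-contained toy illustration (a normal-mean problem), not ABCtoolbox's implementation; all function and parameter names are hypothetical:

```python
import random
import statistics

def abc_rejection(observed_stat, prior_sampler, simulator, summary, eps, n_draws):
    """Basic ABC rejection sampling: keep parameter draws whose simulated
    summary statistic falls within eps of the observed one."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_sampler()                  # 1) draw from the prior
        data = simulator(theta)                  # 2) simulate a dataset
        if abs(summary(data) - observed_stat) < eps:
            accepted.append(theta)               # 3) accept if close enough
    return accepted  # empirical approximation of the posterior

# Toy example: infer the mean of a normal distribution with known sd = 1.
random.seed(0)
true_mu = 2.0
obs = [random.gauss(true_mu, 1.0) for _ in range(100)]
obs_stat = statistics.mean(obs)

posterior = abc_rejection(
    observed_stat=obs_stat,
    prior_sampler=lambda: random.uniform(-5, 5),  # uniform prior on mu
    simulator=lambda mu: [random.gauss(mu, 1.0) for _ in range(100)],
    summary=statistics.mean,
    eps=0.1,
    n_draws=20000,
)
posterior_mean = statistics.mean(posterior)
```

The accepted draws approximate the posterior without a single likelihood evaluation; shrinking `eps` sharpens the approximation at the cost of more rejections.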
Abstract:
MOTIVATION: The detection of positive selection is widely used to study gene and genome evolution, but its application remains limited by the high computational cost of existing implementations. We present a series of computational optimizations for more efficient estimation of the likelihood function on large-scale phylogenetic problems. We illustrate our approach using the branch-site model of codon evolution. RESULTS: We introduce novel optimization techniques that substantially outperform both CodeML from the PAML package and our previously optimized sequential version SlimCodeML. These techniques can also be applied to other likelihood-based phylogeny software. Our implementation scales well for large numbers of codons and/or species. It can therefore analyse substantially larger datasets than CodeML. We evaluated FastCodeML on different platforms and measured average sequential speedups of FastCodeML (single-threaded) versus CodeML of up to 5.8, average speedups of FastCodeML (multi-threaded) versus CodeML on a single node (shared memory) of up to 36.9 for 12 CPU cores, and average speedups of the distributed FastCodeML versus CodeML of up to 170.9 on eight nodes (96 CPU cores in total). AVAILABILITY AND IMPLEMENTATION: ftp://ftp.vital-it.ch/tools/FastCodeML/. CONTACT: selectome@unil.ch or nicolas.salamin@unil.ch.
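The phylogenetic likelihood computations that such software accelerates rest on Felsenstein's pruning algorithm, which propagates conditional likelihoods from the tips of the tree toward the root. As a hedged illustration of that recursion only (a two-state symmetric model on a three-taxon tree, far simpler than the branch-site codon model of the abstract), one might write:

```python
import math

def p_same(t):   # two-state symmetric model: probability of ending in the same state
    return 0.5 * (1.0 + math.exp(-2.0 * t))

def p_diff(t):   # probability of ending in the other state over branch length t
    return 0.5 * (1.0 - math.exp(-2.0 * t))

def partial(node, tips):
    """Felsenstein pruning: conditional likelihoods
    [P(data below node | state 0), P(data below node | state 1)]."""
    if isinstance(node, str):                     # leaf: observed state is certain
        s = tips[node]
        return [1.0 if s == 0 else 0.0, 1.0 if s == 1 else 0.0]
    (left, t_l), (right, t_r) = node              # internal: two (child, branch) pairs
    pl, pr = partial(left, tips), partial(right, tips)
    out = []
    for a in (0, 1):                              # sum over child states for each parent state
        lk_l = sum((p_same(t_l) if a == b else p_diff(t_l)) * pl[b] for b in (0, 1))
        lk_r = sum((p_same(t_r) if a == b else p_diff(t_r)) * pr[b] for b in (0, 1))
        out.append(lk_l * lk_r)
    return out

def site_likelihood(root, tips):
    p = partial(root, tips)
    return 0.5 * p[0] + 0.5 * p[1]                # uniform stationary distribution at the root

# Tree ((A:0.1, B:0.1):0.2, C:0.3) and two single-site tip patterns.
inner = (('A', 0.1), ('B', 0.1))
root = ((inner, 0.2), ('C', 0.3))
lik_same = site_likelihood(root, {'A': 0, 'B': 0, 'C': 0})
lik_mixed = site_likelihood(root, {'A': 0, 'B': 0, 'C': 1})
```

As expected on short branches, an all-identical tip pattern is more probable than a mixed one; real implementations repeat this recursion over thousands of sites and 61 codon states, which is where the optimizations described above pay off.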
Abstract:
Probabilistic inversion methods based on Markov chain Monte Carlo (MCMC) simulation are well suited to quantify parameter and model uncertainty of nonlinear inverse problems. Yet, application of such methods to CPU-intensive forward models can be a daunting task, particularly if the parameter space is high dimensional. Here, we present a 2-D pixel-based MCMC inversion of plane-wave electromagnetic (EM) data. Using synthetic data, we investigate how model parameter uncertainty depends on model structure constraints using different norms of the likelihood function and the model constraints, and study the added benefits of joint inversion of EM and electrical resistivity tomography (ERT) data. Our results demonstrate that model structure constraints are necessary to stabilize the MCMC inversion results of a highly discretized model. These constraints decrease model parameter uncertainty and facilitate model interpretation. A drawback is that these constraints may lead to posterior distributions that do not fully include the true underlying model, because some of its features exhibit a low sensitivity to the EM data, and hence are difficult to resolve. This problem can be partly mitigated if the plane-wave EM data is augmented with ERT observations. The hierarchical Bayesian inverse formulation introduced and used herein is able to successfully recover the probabilistic properties of the measurement data errors and a model regularization weight. Application of the proposed inversion methodology to field data from an aquifer demonstrates that the posterior mean model realization is very similar to that derived from a deterministic inversion with similar model constraints.
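The engine of such a pixel-based probabilistic inversion is a Metropolis-type sampler that combines a data-misfit (likelihood) term with prior model constraints. A one-dimensional sketch, assuming a toy linear forward model rather than the plane-wave EM physics of the study (all names here are illustrative):

```python
import math
import random

def metropolis(log_likelihood, log_prior, proposal_sd, x0, n_iter):
    """Random-walk Metropolis sampler, the workhorse behind MCMC inversions
    (shown in one dimension for clarity)."""
    x = x0
    lp = log_likelihood(x) + log_prior(x)
    chain = []
    for _ in range(n_iter):
        x_new = x + random.gauss(0.0, proposal_sd)        # symmetric proposal
        lp_new = log_likelihood(x_new) + log_prior(x_new)
        if math.log(random.random()) < lp_new - lp:        # accept/reject step
            x, lp = x_new, lp_new
        chain.append(x)
    return chain

# Toy forward model d = 2*m with Gaussian noise; recover m from noisy data.
random.seed(1)
m_true = 3.0
data = [2.0 * m_true + random.gauss(0, 0.5) for _ in range(50)]

def log_like(m):   # Gaussian errors (sd = 0.5) give an L2-norm misfit
    return -sum((d - 2.0 * m) ** 2 for d in data) / (2 * 0.5 ** 2)

def log_prior(m):  # flat prior on a bounded interval
    return 0.0 if -10 < m < 10 else -math.inf

chain = metropolis(log_like, log_prior, proposal_sd=0.1, x0=0.0, n_iter=5000)
posterior_mean = sum(chain[1000:]) / len(chain[1000:])    # discard burn-in
```

Replacing `log_prior` with a model-structure penalty (e.g. an L1 or L2 norm on differences between neighbouring pixels) is how the regularization constraints discussed in the abstract enter the sampler.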
Abstract:
Although tumor-specific CD8 T-cell responses often develop in cancer patients, they rarely result in tumor eradication. We aimed to study directly the functional efficacy of tumor-specific CD8 T cells at the site of immune attack. Tumor lesions in lymphoid and nonlymphoid tissues (metastatic lymph nodes and soft tissue/visceral metastases, respectively) were collected from stage III/IV melanoma patients and investigated for the presence and function of CD8 T cells specific for the tumor differentiation antigen Melan-A/MART-1. Comparative analysis was conducted with peripheral blood T cells. We provide evidence that in vivo priming selects, within the available naive Melan-A/MART-1-specific CD8 T-cell repertoire, cells with high T-cell receptor avidity that can efficiently kill melanoma cells in vitro. In vivo-primed Melan-A/MART-1-specific CD8 T cells accumulate at high frequency in both lymphoid and nonlymphoid tumor lesions. Unexpectedly, however, whereas primed Melan-A/MART-1-specific CD8 T cells that circulate in the blood display robust inflammatory and cytotoxic functions, those that reside in tumor lesions (particularly in metastatic lymph nodes) are functionally tolerant. We show that both the lymph node and the tumor environments blunt T-cell effector functions and offer a rationale for the failure of tumor-specific responses to effectively counter tumor progression.
Abstract:
BACKGROUND: Coronary endothelial function is abnormal in patients with established coronary artery disease and was recently shown by MRI to relate to the severity of luminal stenosis. Recent advances in MRI now allow the noninvasive assessment of both anatomic and functional (endothelial function) changes that previously required invasive studies. We tested the hypothesis that abnormal coronary endothelial function is related to measures of early atherosclerosis such as increased coronary wall thickness. METHODS AND RESULTS: Seventeen arteries in 14 healthy adults and 17 arteries in 14 patients with nonobstructive coronary artery disease were studied. To measure endothelial function, coronary MRI was performed before and during isometric handgrip exercise, an endothelial-dependent stressor, and changes in coronary cross-sectional area and flow were measured. Black blood imaging was performed to quantify coronary wall thickness and indices of arterial remodeling. The mean stress-induced change in cross-sectional area was significantly higher in healthy adults (13.5%±12.8%, mean±SD, n=17) than in those with mildly diseased arteries (-2.2%±6.8%, P<0.0001, n=17). Mean coronary wall thickness was lower in healthy subjects (0.9±0.2 mm) than in patients with coronary artery disease (1.4±0.3 mm, P<0.0001). In contrast to healthy subjects, stress-induced changes in cross-sectional area, a measure of coronary endothelial function, correlated inversely with coronary wall thickness in patients with coronary artery disease (r=-0.73, P=0.0008). CONCLUSIONS: There is an inverse relationship between coronary endothelial function and local coronary wall thickness in patients with coronary artery disease but not in healthy adults. These findings demonstrate that local endothelial-dependent functional changes are related to the extent of early anatomic atherosclerosis in mildly diseased arteries. 
This combined MRI approach enables the anatomic and functional investigation of early coronary disease.
Abstract:
Microcirculation (2010) 17, 69-78. doi: 10.1111/j.1549-8719.2010.00002.x
Background: This study was designed to explore the effect of transient inducible nitric oxide synthase (iNOS) overexpression via cationic liposome-mediated gene transfer on cardiac function, fibrosis, and microvascular perfusion in a porcine model of chronic ischemia. Methods and Results: Chronic myocardial ischemia was induced using a minimally invasive model in 23 landrace pigs. Upon demonstration of heart failure, 10 animals were treated with liposome-mediated iNOS gene transfer by local intramyocardial injection and 13 animals received a sham procedure to serve as control. The efficacy of this iNOS gene transfer was demonstrated for up to 7 days by reverse transcriptase-polymerase chain reaction in preliminary studies. Four weeks after iNOS transfer, magnetic resonance imaging showed no effect of iNOS overexpression on cardiac contractility at rest and during dobutamine stress (resting ejection fraction: control 27%, iNOS 26%; P = ns). Late enhancement, infarct size, and the amount of fibrosis were similar between groups. Although perfusion and perfusion reserve in response to adenosine and dobutamine were not significantly modified by iNOS transfer, both vessel number and diameter were significantly increased in the ischemic area in the iNOS-treated group versus control (point score: control 15.3, iNOS 34.7; P < 0.05). Conclusions: Our findings demonstrate that transient iNOS overexpression does not aggravate cardiac dysfunction or postischemic fibrosis, while potentially contributing to neovascularization in the chronically ischemic heart.
Abstract:
Thymic stromal lymphopoietin (TSLP) is a mucosal tissue-associated cytokine that has been widely studied in the context of T helper type 2 (Th2)-driven inflammatory disorders. Although TSLP is also produced upon viral infection in vitro, the role of TSLP in antiviral immunity is unknown. In this study we report a novel role for TSLP in promoting viral clearance and virus-specific CD8+ T-cell responses during influenza A infection. Comparing the immune responses of wild-type and TSLP receptor (TSLPR)-deficient mice, we show that TSLP was required for the expansion and activation of virus-specific effector CD8+ T cells in the lung, but not the lymph node. The mechanism involved TSLPR signaling on newly recruited CD11b+ inflammatory dendritic cells (DCs) that acted to enhance interleukin-15 production and expression of the costimulatory molecule CD70. Taken together, these data highlight the pleiotropic activities of TSLP and provide evidence for its beneficial role in antiviral immunity.
Abstract:
The thesis examines the impact of collective war victimization on individuals' readiness to accept or assign collective guilt for past war atrocities. As a complement to previous studies, its aim is to articulate an integrated approach to collective victimization, which distinguishes between individual-, communal-, and societal-level consequences of warfare. Building on a social representation approach, it is guided by the assumption that individuals form beliefs about a conflict through their personal experiences of victimization, communal experiences of warfare that occur in their proximal surroundings, and the mass-mediatised narratives that circulate in a society's public sphere. Four empirical studies test the hypothesis that individuals' beliefs about the conflict depend on the level and type of war experiences to which they have been exposed, that is, on the informative and normative micro and macro contexts in which they are embedded. The studies were conducted in the context of the Yugoslav wars that attended the breakup of Yugoslavia, a series of wars fought between 1991 and 2001 during which numerous war atrocities were perpetrated, causing massive victimisation of the population. To examine the content and impact of war experiences at each level of analysis, the empirical studies employed various methodological strategies, from quantitative analyses of a representative public opinion survey to qualitative analyses of media content and political speeches. Study 1 examines the impact of individual- and communal-level war experiences on individuals' acceptance and assignment of collective guilt. It further examines the impact of the type of communal-level victimization: exposure to symmetric violence (i.e., violence that similarly affects members of different ethnic groups, including adversaries) and asymmetric violence. The main goal of Study 2 is to examine the structural and political circumstances that enhance collective guilt assignment.
While the previous studies emphasize the role of past victimisation, Study 2 tests the assumption that the political demobilisation strategy employed by elites facing public discontent in collective system-threatening circumstances can fuel out-group blame. Studies 3 and 4 were conducted predominantly in the context of Croatia and examine the rhetorical construction of the dominant politicized narrative of the war in the public sphere (Study 3) and its maintenance through public delegitimization of alternative (critical) representations (Study 4). Study 4 further examines the likelihood that highly identified group members adhere to publicly delegitimized critical stances on the war.
Abstract:
OBJECTIVE: To evaluate whether healthy or diabetic adult mice can tolerate an extreme loss of pancreatic α-cells and how this sudden massive depletion affects β-cell function and blood glucose homeostasis. RESEARCH DESIGN AND METHODS: We generated a new transgenic model allowing near-total α-cell removal specifically in adult mice. Massive α-cell ablation was triggered in normally grown and healthy adult animals upon diphtheria toxin (DT) administration. The metabolic status of these mice was assessed in 1) physiologic conditions, 2) a situation requiring glucagon action, and 3) after β-cell loss. RESULTS: Adult transgenic mice enduring extreme (98%) α-cell removal remained healthy and did not display major defects in insulin counter-regulatory response. We observed that 2% of the normal α-cell mass produced enough glucagon to ensure near-normal glucagonemia. β-Cell function and blood glucose homeostasis remained unaltered after α-cell loss, indicating that direct local intraislet signaling between α- and β-cells is dispensable. Escaping α-cells increased their glucagon content during subsequent months, but there was no significant α-cell regeneration. Near-total α-cell ablation did not prevent hyperglycemia in mice that had also undergone massive β-cell loss, indicating that a minimal amount of α-cells can still guarantee normal glucagon signaling in diabetic conditions. CONCLUSIONS: An extremely low amount of α-cells is sufficient to prevent a major counter-regulatory deregulation, both under physiologic and diabetic conditions. We previously reported that α-cells reprogram to insulin production after extreme β-cell loss and now conjecture that the low α-cell requirement could be exploited in future diabetic therapies aimed at regenerating β-cells by reprogramming adult α-cells.
Abstract:
BACKGROUND: We estimated the heritability of three measures of glomerular filtration rate (GFR) in hypertensive families of African descent in the Seychelles (Indian Ocean). METHODS: Families with at least two hypertensive siblings and an average of two normotensive siblings were identified through a national hypertension register. Using the ASSOC program in SAGE (Statistical Analysis in Genetic Epidemiology), the age- and gender-adjusted narrow sense heritability of GFR was estimated by maximum likelihood assuming multivariate normality after power transformation. ASSOC can calculate the additive polygenic component of the variance of a trait from pedigree data in the presence of other familial correlations. The effects of body mass index (BMI), blood pressure, natriuresis, along with sodium to potassium ratio in urine and diabetes, were also tested as covariates. RESULTS: Inulin clearance, 24-hour creatinine clearance, and GFR based on the Cockcroft-Gault formula were available for 348 persons from 66 pedigrees. The age- and gender-adjusted correlations (+/- SE) were 0.51 (+/- 0.04) between inulin clearance and creatinine clearance, 0.53 (+/- 0.04) between inulin clearance and Cockcroft-Gault formula and 0.66 (+/- 0.03) between creatinine clearance and Cockcroft-Gault formula. The age- and gender-adjusted heritabilities (+/- SE) of GFR were 0.41 (+/- 0.10) for inulin clearance, 0.52 (+/- 0.13) for creatinine clearance, and 0.82 (+/- 0.09) for Cockcroft-Gault formula. Adjustment for BMI slightly lowered the correlations and heritabilities for all measurements whereas adjustment for blood pressure had virtually no effect. CONCLUSION: The significant heritability estimates of GFR in our sample of families of African descent confirm the familial aggregation of this trait and justify further analyses aimed at discovering genetic determinants of GFR.
Abstract:
BACKGROUND: The risk/benefit profile of intravitreal melphalan injection for treatment of active vitreous seeds in retinoblastoma remains uncertain. We report clinical and electroretinography results after 6 months of one patient who has shown a favorable initial clinical response to intravitreal melphalan injections for treatment of refractory vitreous seeds. METHODS: Clinical case report. PATIENT: The patient presented at age 17 months with bilateral retinoblastoma [OD: International Classification (ICRB) group E, Reese-Ellsworth (R-E) class Vb; OS: ICRB D, R-E Vb] with no known prior family history. The right eye was enucleated primarily. The patient received systemic chemotherapy and extensive local treatment to the left eye. Ten months later, she presented with recurrent disease, including fine, diffuse vitreous seeds. Tumor control was established with intra-arterial chemotherapy and local treatment. Subsequent recurrence was treated with further intra-arterial chemotherapy, local treatment, and plaque radiotherapy with iodine-125. Persistent free-floating spherical vitreous seeds were treated with 4 cycles of intravitreal melphalan injection via the pars plana, with doses of 30, 30, 30, and 20 μg. RESULTS: After 6 months of follow-up, the left eye remained free of active tumor. Visual acuity was 20/40. Photopic ERG amplitudes were unchanged compared with those recorded prior to the intravitreal injection treatments. CONCLUSIONS: Intravitreal melphalan injection for refractory spherical vitreous seeds of retinoblastoma with favorable tumor response is compatible with good central visual acuity and preservation of retinal function as indicated by photopic ERG recordings.
Abstract:
OBJECTIVES/HYPOTHESIS: Facial nerve regeneration is limited in some clinical situations: with long grafts, in aged patients, and when the delay between nerve lesion and repair is prolonged. This deficient regeneration is due to the limited number of regenerating nerve fibers, their immaturity, and the unresponsiveness of Schwann cells after a long period of denervation. This study proposes to apply glial cell line-derived neurotrophic factor (GDNF) to facial nerve grafts via nerve guidance channels to improve regeneration. METHODS: Two situations were evaluated: immediate and delayed grafts (repair 7 months after the lesion). Each group contained three subgroups: a) graft without channel; b) graft with a channel without neurotrophic factor; and c) graft with a GDNF-releasing channel. A functional analysis was performed with clinical observation of facial nerve function and a nerve conduction study at 6 weeks. Histological analysis was performed by counting the number of myelinated fibers within the graft and distal to the graft. Central regeneration was assessed with Fluoro-Ruby retrograde labeling and Nissl staining. RESULTS: This study showed that GDNF allowed an increase in the number and maturation of nerve fibers, as well as in the number of retrogradely labeled neurons, in delayed anastomoses. In contrast, after immediate repair, the nerves regenerated in the presence of GDNF showed inferior results compared with the other groups. CONCLUSIONS: GDNF is a potent neurotrophic factor for improving facial nerve regeneration in grafts performed several months after the nerve lesion. However, GDNF should not be used for immediate repair, as it possibly inhibits nerve regeneration.
Abstract:
Summary: Discrete data arise in various research fields, typically when the observations are count data. I propose a robust and efficient parametric procedure for the estimation of discrete distributions. The estimation is done in two phases. First, a very robust, but possibly inefficient, estimate of the model parameters is computed and used to identify outliers. Then the outliers are either removed from the sample or given low weights, and a weighted maximum likelihood estimate (WML) is computed. The weights are determined via an adaptive process such that, if the data follow the model, asymptotically no observation is downweighted. I prove that the final estimator inherits the breakdown point of the initial one, and that its influence function at the model is the same as that of the maximum likelihood estimator, which strongly suggests that it is asymptotically fully efficient. The initial estimator is a minimum disparity estimator (MDE). MDEs can be shown to have full asymptotic efficiency, and some MDEs have very high breakdown points and very low bias under contamination. Several initial estimators are considered, and the performance of the WMLs based on each of them is studied. It turns out that in a great variety of situations the WML substantially improves on the initial estimator, both in terms of finite-sample mean squared error and in terms of bias under contamination. Moreover, the performance of the WML is rather stable under a change of the MDE, even if the MDEs have very different behaviors. Two examples of application of the WML to real data are considered.
In both of them, the need for a robust estimator is clear: the maximum likelihood estimator is badly corrupted by the presence of a few outliers. This procedure is particularly natural in the discrete distribution setting, but could be extended to the continuous case, for which a possible procedure is sketched.
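The two-phase procedure summarized above can be sketched for a Poisson model. Note this is an illustration, not the thesis's method: the sample median stands in for the minimum disparity estimator, and hard 0/1 weights replace the adaptive weighting scheme:

```python
import math
import statistics

def weighted_mle_poisson(data, cutoff=3.0):
    """Two-phase robust fit of a Poisson mean:
    1) robust initial estimate, 2) downweight outliers, 3) weighted MLE."""
    lam0 = max(statistics.median(data), 1e-9)      # phase 1: robust initial fit (median)
    weights = [
        1.0 if abs(x - lam0) / math.sqrt(lam0) <= cutoff else 0.0
        for x in data                              # phase 2: hard-rejection weights
    ]
    # Phase 3: weighted maximum likelihood. For a Poisson, the weighted MLE
    # of the mean is simply the weighted sample mean.
    return sum(w * x for w, x in zip(weights, data)) / sum(weights)

# Clean Poisson(5)-like counts plus two gross outliers.
sample = [4, 5, 6, 5, 3, 7, 5, 4, 6, 5, 40, 55]
lam_wml = weighted_mle_poisson(sample)
lam_mle = sum(sample) / len(sample)   # ordinary MLE, pulled up by the outliers
```

On this sample the ordinary MLE is roughly 12 while the weighted estimate recovers the mean of the clean counts (5.0), illustrating why the abstract calls the unweighted maximum likelihood estimator "badly corrupted by the presence of a few outliers".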