222 results for Open reduction


Relevance: 20.00%

Abstract:

BACKGROUND AND PURPOSE: Transgenic mice overexpressing Notch2 in the uvea exhibit a hyperplastic ciliary body leading to increased intraocular pressure (IOP) and glaucoma. The aim of this study was to investigate the possible presence of NOTCH2 variants in patients with primary open-angle glaucoma (POAG). METHODS: We screened DNA samples from 130 patients with POAG for NOTCH2 variants by denaturing high-performance liquid chromatography after PCR amplification and validated our data by direct Sanger sequencing. RESULTS: No mutations were observed in the coding regions of NOTCH2 or in the splice sites. Nineteen known single nucleotide polymorphisms (SNPs) were detected. An SNP located in intron 24, c.[4005+45A>G], was seen in 28.5% of the patients (37/130). As this SNP is reported to have a minor allele frequency of 7% in the 1000 Genomes database, it could have been associated with POAG. However, we evaluated its frequency in an ethnically matched control group of 96 subjects unaffected by POAG and observed a frequency of 29%, indicating that it is not related to POAG. CONCLUSION: NOTCH2 seemed to be a good candidate gene for POAG, as it is expressed in the anterior segment of the human eye. However, mutational analysis did not reveal any causative mutation. This study also shows that properly ethnically matched control groups are essential in association studies and that values given in databases can be misleading.
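For illustration, the case-control comparison above can be reproduced with a simple contingency-table test. A minimal sketch in Python: the patient count (37/130) is as reported, while the control carrier count of 28/96 is an assumption reconstructed from the stated 29% frequency.

```python
from scipy.stats import fisher_exact

# Carriers of the intron-24 SNP c.[4005+45A>G]:
# patients: 37 of 130 (28.5%, as reported)
# controls: assumed 28 of 96 (~29%, reconstructed from the reported frequency)
table = [[37, 130 - 37],   # patients: carriers, non-carriers
         [28, 96 - 28]]    # controls: carriers, non-carriers

odds_ratio, p_value = fisher_exact(table)
print(f"OR = {odds_ratio:.2f}, p = {p_value:.3f}")
# A p-value near 1 means no detectable association between the SNP and POAG,
# matching the study's conclusion.
```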

Relevance: 20.00%

Abstract:

This paper considers an alternative perspective on China's exchange rate policy. It studies a semi-open economy where the private sector has no access to international capital markets but the central bank has full access. Moreover, it assumes limited financial development, which generates a large demand for saving instruments from the private sector. The paper analyzes the optimal exchange rate policy by modeling the central bank as a Ramsey planner. Its main result is that, in a growth acceleration episode, it is optimal to have an initial real depreciation of the currency combined with an accumulation of reserves, which is consistent with the Chinese experience. This depreciation is followed by an appreciation in the long run. The paper also shows that the optimal exchange rate path is close to the one that would arise in an economy with full capital mobility and no central bank intervention.
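As a rough illustration (a generic formulation under assumed notation, not the paper's exact model), the Ramsey planner's problem can be written as choosing private consumption and foreign reserves to maximize household welfare subject to the economy's resource constraint:

```latex
\max_{\{c_t,\, R_{t+1}\}} \; \mathbb{E}_0 \sum_{t=0}^{\infty} \beta^t \, u(c_t)
\qquad \text{s.t.} \qquad c_t + R_{t+1} = y_t + (1 + r^{*})\, R_t ,
```

where \(R_t\) denotes the central bank's foreign reserves, \(r^{*}\) the world interest rate, and \(y_t\) output during the growth acceleration; the real exchange rate is then pinned down by the implied path of domestic absorption.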

Relevance: 20.00%

Abstract:

How a stimulus or a task alters the spontaneous dynamics of the brain remains a fundamental open question in neuroscience. One of the most robust hallmarks of task- or stimulus-driven brain dynamics is the decrease of variability with respect to the spontaneous level, an effect seen across multiple experimental conditions and in brain signals observed at different spatiotemporal scales. Recently, it was observed that the trial-to-trial variability and temporal variance of functional magnetic resonance imaging (fMRI) signals decrease during task-driven activity. Here we examined the dynamics of a large-scale model of the human cortex to provide a mechanistic understanding of these observations. The model allows computing the statistics of synaptic activity in the spontaneous condition and in putative tasks determined by external inputs to a given subset of brain regions. We demonstrated that external inputs decrease the variance, increase the covariances, and decrease the autocovariance of synaptic activity as a consequence of single-node and large-scale network dynamics. Together, these changes in network statistics imply a reduction of entropy, meaning that the spontaneous synaptic activity outlines a larger multidimensional activity space than does the task-driven activity. We tested this model prediction on fMRI signals from healthy humans acquired during rest and task conditions and found a significant decrease of entropy in the stimulus-driven activity. Overall, our study proposes a mechanism for increasing the information capacity of brain networks: enlarging the volume of possible activity configurations at rest and reliably settling into a confined stimulus-driven state allows better transmission of stimulus-related information.
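To make the entropy argument concrete, here is a minimal sketch using the differential entropy of a multivariate Gaussian, H = ½[n ln(2πe) + ln det Σ], with illustrative covariance values (not the study's data): shrinking variances and strengthening covariances both reduce det Σ, and hence the entropy.

```python
import numpy as np

def gaussian_entropy(cov):
    """Differential entropy (nats) of a zero-mean multivariate Gaussian:
    H = 0.5 * (n * ln(2*pi*e) + ln det(cov))."""
    n = cov.shape[0]
    return 0.5 * (n * np.log(2 * np.pi * np.e) + np.linalg.slogdet(cov)[1])

n = 3  # toy network of three regions
rest = np.full((n, n), 0.1) + 0.9 * np.eye(n)  # rest: high variance, weak coupling
task = np.full((n, n), 0.3) + 0.3 * np.eye(n)  # task: lower variance, stronger covariance

print(gaussian_entropy(rest), gaussian_entropy(task))
# The task covariance yields the lower entropy: a more confined activity space.
```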

Relevance: 20.00%

Abstract:

The aim of the present study was to elicit how patients with delusions with religious content conceptualized or experienced their spirituality and religiousness. Sixty-two patients with present or past religious delusions underwent semistructured interviews, which were analyzed using the three coding steps described in grounded theory. Three major themes were found in religious delusions: "spiritual identity," "meaning of illness," and "spiritual figures." One higher-order concept was found: "structure of beliefs." We identified dynamics that put these personal beliefs into constant reconstruction through interaction with the world and others (i.e., open dynamics) and, conversely, structural dynamics that created a complete rupture with the surrounding world and others (i.e., closed structural dynamics); these dynamics may coexist. These analyses may help to identify the psychological functions of delusions with religious content and, therefore, to better conceptualize interventions for dealing with them in psychotherapy.

Relevance: 20.00%

Abstract:

Our consumption of groundwater, in particular as drinking water and for irrigation, has considerably increased over the years, and groundwater is becoming an increasingly scarce and endangered resource. Nowadays, we are facing many problems ranging from water prospection to sustainable management and remediation of polluted aquifers. Independently of the hydrogeological problem, the main challenge remains dealing with incomplete knowledge of the underground properties. Stochastic approaches have been developed to represent this uncertainty by considering multiple geological scenarios and generating a large number of realizations. The main limitation of this approach is the computational cost associated with performing complex flow simulations in each realization.

In the first part of the thesis, we explore this issue in the context of uncertainty propagation, where an ensemble of geostatistical realizations is identified as representative of the subsurface uncertainty. To propagate this lack of knowledge to the quantity of interest (e.g., the concentration of pollutant in extracted water), it is necessary to evaluate the flow response of each realization. Due to computational constraints, state-of-the-art methods make use of approximate flow simulations to identify a subset of realizations that represents the variability of the ensemble. The complex and computationally heavy flow model is then run for this subset, based on which inference is made. Our objective is to increase the performance of this approach by using all of the available information, not solely the subset of exact responses. Two error models are proposed to correct the approximate responses following a machine learning approach. For the subset identified by a classical approach (here, the distance kernel method), both the approximate and the exact responses are known. This information is used to construct an error model and correct the ensemble of approximate responses, predicting the "expected" responses of the exact model. The proposed methodology makes use of all the available information without perceptible additional computational cost and increases the accuracy and robustness of the uncertainty propagation. The strategy explored in the first chapter thus consists in learning, from a subset of realizations, the relationship between the proxy and exact curves.
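A minimal sketch of this error-model idea in Python (hypothetical data shapes, with ordinary PCA and a linear model standing in for the thesis's functional regression): learn the proxy-to-exact mapping on the small subset where both models were run, then correct every remaining proxy response.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Hypothetical ensemble: 500 realizations, responses discretized at 50 time steps.
n_real, n_t = 500, 50
proxy = rng.normal(size=(n_real, n_t)).cumsum(axis=1)            # cheap proxy curves
subset = rng.choice(n_real, size=40, replace=False)              # realizations also run exactly
exact_subset = proxy[subset] + 0.3 * rng.normal(size=(40, n_t))  # stand-in exact curves

# Compress the curves to a few principal-component scores, then regress the
# exact responses on the proxy scores (the thesis uses functional PCA; plain
# PCA on discretized curves plays the same role in this sketch).
pca = PCA(n_components=5).fit(proxy)
error_model = LinearRegression().fit(pca.transform(proxy[subset]), exact_subset)

# Predicted "expected" exact responses for the whole ensemble.
corrected = error_model.predict(pca.transform(proxy))
```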
In the second part of the thesis, this strategy is formalized in a rigorous mathematical framework by defining a regression model between functions. As this problem is ill-posed, its dimensionality must be reduced. The novelty of the work comes from the use of functional principal component analysis (FPCA), which not only performs the dimensionality reduction while maximizing the retained information, but also allows a diagnostic of the quality of the error model in the functional space. The proposed methodology is applied to a pollution problem involving a non-aqueous phase liquid. The error model allows a strong reduction of the computational cost while providing a good estimate of the uncertainty. The individual correction of each proxy response by the error model leads to an excellent prediction of the exact response, opening the door to many applications.

The concept of a functional error model is useful not only in the context of uncertainty propagation but also, and maybe even more so, for performing Bayesian inference. Markov chain Monte Carlo (MCMC) algorithms are the most common choice to ensure that the generated realizations are sampled in accordance with the observations. However, this approach suffers from low acceptance rates in high-dimensional problems, resulting in a large number of wasted flow simulations. This led to the introduction of two-stage MCMC, where the computational cost is decreased by avoiding unnecessary simulations of the exact flow model thanks to a preliminary evaluation of each proposal. In the third part of the thesis, a proxy coupled to an error model provides the approximate response for the two-stage MCMC setup. We demonstrate an increase in the acceptance rate by a factor of 1.5 to 3 with respect to a classical one-stage MCMC implementation. An open question remains: how to choose the size of the learning set and how to identify the realizations that optimize the construction of the error model. This calls for an iterative strategy in which, as new flow simulations are performed, the error model is improved by incorporating the new information. This is developed in the fourth part of the thesis, in which the methodology is applied to a saline intrusion problem in a coastal aquifer.
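The two-stage idea can be sketched as a delayed-acceptance Metropolis loop (placeholder log-densities below; in the thesis the cheap stage would be the proxy simulation corrected by the error model, and the expensive stage the exact flow simulation):

```python
import numpy as np

rng = np.random.default_rng(1)

def log_post_cheap(theta):    # stage 1: proxy + error-model correction (placeholder)
    return -0.5 * np.sum((theta - 1.0) ** 2)

def log_post_exact(theta):    # stage 2: exact flow simulation (placeholder)
    return -0.55 * np.sum((theta - 1.0) ** 2)

theta = np.zeros(10)
lp_cheap, lp_exact = log_post_cheap(theta), log_post_exact(theta)

for _ in range(1000):
    prop = theta + 0.3 * rng.normal(size=theta.size)

    # Stage 1: screen the proposal with the cheap posterior only.
    lp_prop_cheap = log_post_cheap(prop)
    if np.log(rng.uniform()) >= lp_prop_cheap - lp_cheap:
        continue  # rejected without ever running the exact model

    # Stage 2: run the exact model; the correction term keeps the chain
    # targeting the exact posterior despite the cheap pre-screening.
    lp_prop_exact = log_post_exact(prop)
    log_alpha = (lp_prop_exact - lp_exact) + (lp_cheap - lp_prop_cheap)
    if np.log(rng.uniform()) < log_alpha:
        theta = prop
        lp_cheap, lp_exact = lp_prop_cheap, lp_prop_exact
```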

Relevance: 20.00%

Abstract:

The enhanced functional sensitivity offered by ultra-high-field imaging may significantly benefit simultaneous EEG-fMRI studies, but the concurrent increase in artifact contamination can strongly compromise EEG data quality. In the present study, we focus on EEG artifacts created by head motion in the static B0 field. A novel approach for motion artifact detection is proposed, based on a simple modification of a commercial EEG cap, in which four electrodes are non-permanently adapted to record only magnetic induction effects. Simultaneous EEG-fMRI data were acquired with this setup, at 7T, from healthy volunteers undergoing a reversing-checkerboard visual stimulation paradigm. Data analysis assisted by the motion sensors revealed that, after gradient artifact correction, EEG signal variance was largely dominated by pulse artifacts (81-93%), but contributions from spontaneous motion (4-13%) were still comparable to or even larger than those of actual neuronal activity (3-9%). Multiple approaches were tested to determine the most effective procedure for denoising EEG data using the motion sensor information. Optimal results were obtained by applying an initial pulse artifact correction step based on average artifact subtraction (AAS), followed by motion artifact correction (based on the motion sensors) and independent component analysis (ICA) denoising. On average, motion artifact correction (after AAS) yielded a 61% reduction in signal power and a 62% increase in visual evoked potential (VEP) trial-by-trial consistency. Combined with ICA, these improvements rose to a 74% power reduction and an 86% increase in trial consistency. Overall, the improvements achieved were well appreciable at the single-subject and single-trial levels and set an encouraging quality mark for simultaneous EEG-fMRI at ultra-high field.
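A minimal sketch of the sensor-based correction step (hypothetical array shapes; ordinary least squares stands in for the regression-based denoising): since the four adapted electrodes record only magnetic induction, their time courses can be regressed out of each EEG channel.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical data: 30 EEG channels plus 4 induction-only motion-sensor
# channels, 10 s at 1 kHz, after gradient artifact correction.
n_samp = 10_000
sensors = rng.normal(size=(n_samp, 4))                        # motion reference channels
eeg = rng.normal(size=(n_samp, 30)) + sensors @ rng.normal(size=(4, 30))

# Fit each EEG channel on the motion sensors, then subtract the fitted
# motion contribution.
beta, *_ = np.linalg.lstsq(sensors, eeg, rcond=None)
eeg_clean = eeg - sensors @ beta

print(f"variance removed: {1 - eeg_clean.var() / eeg.var():.0%}")
```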

Relevance: 20.00%

Abstract:

BACKGROUND: The management of unresectable metastatic colorectal cancer (mCRC) is a comprehensive treatment strategy involving several lines of therapy, maintenance, salvage surgery, and treatment-free intervals. Besides chemotherapy (fluoropyrimidine, oxaliplatin, irinotecan), molecular-targeted agents such as anti-angiogenic agents (bevacizumab, aflibercept, regorafenib) and anti-epidermal growth factor receptor agents (cetuximab, panitumumab) have become available. Ultimately, given the increasing cost of new active compounds, new strategy trials are needed to define the optimal use and the best sequencing of these agents. Such trials require alternative endpoints that can capture the effect of several treatment lines and be measured earlier than overall survival, to help shorten the duration and reduce the size and cost of trials. METHODS/DESIGN: STRATEGIC-1 is an international, open-label, randomized, multicenter phase III trial designed to determine an optimally personalized sequence of the available treatment modalities in patients with unresectable RAS wild-type mCRC. Two standard treatment strategies are compared: first-line FOLFIRI-cetuximab followed by oxaliplatin-based second-line chemotherapy with bevacizumab (Arm A) vs. first-line OPTIMOX-bevacizumab followed by irinotecan-based second-line chemotherapy with bevacizumab and by an anti-epidermal growth factor receptor monoclonal antibody, with or without irinotecan, as third-line treatment (Arm B). The primary endpoint is duration of disease control. A total of 500 patients will be randomized in a 1:1 ratio to one of the two treatment strategies. DISCUSSION: The STRATEGIC-1 trial is designed to provide global information on therapeutic sequences in patients with unresectable RAS wild-type mCRC, which in turn is likely to have a significant impact on the management of this patient population. The trial has been open for inclusion since August 2013. TRIAL REGISTRATION: STRATEGIC-1 is registered at ClinicalTrials.gov (NCT01910610, 23 July 2013) and at EudraCT (No. 2013-001928-19, 25 April 2013).

Relevance: 20.00%

Abstract:

Since the first implantation of an endograft in 1991, endovascular aneurysm repair (EVAR) has rapidly gained recognition. Historical trials showed lower early mortality rates, but these results were not maintained beyond 4 years. Despite newer-generation devices, higher rates of reintervention are associated with EVAR during follow-up. Therefore, the best therapeutic decision relies on many parameters that the physician has to take into consideration. Patients' preferences and characteristics are important, especially age and life expectancy besides health status. Aneurysmal anatomy probably remains the most predictive factor and should be carefully evaluated to offer the best treatment. Unfavorable anatomy has been observed to be associated with more complications, especially endoleak, leading to more reinterventions and a higher risk of late mortality. Nevertheless, technological advances have made surgeons move beyond the set barriers. Thus, more endografts are implanted outside the instructions for use, despite excellent results after open repair, especially in low-risk patients. When debating abdominal aortic aneurysm (AAA) repair, some other crucial points should be analysed. It has been shown that strict surveillance is mandatory after EVAR to offer durable results and prevent late rupture. Such a program is associated with additional costs and an increased risk of radiation exposure. Moreover, a risk of loss of renal function exists when repetitive imaging and secondary procedures are required. The aim of this article is to review the data on AAA and its treatment in order to establish selection criteria for deciding between open and endovascular repair.

Relevance: 20.00%

Abstract:

Despite the proven ability of immunization to reduce Helicobacter infection in mouse models, the precise mechanism of protection has remained elusive. In this study, we evaluated the role of inflammatory monocytes in the vaccine-induced reduction of Helicobacter felis infection. Using flow cytometric analysis, we first showed that Ly6C(low) major histocompatibility complex class II-positive chemokine receptor type 2 (CCR2)-positive CD64(+) inflammatory monocytes accumulate in the stomach mucosa during the vaccine-induced reduction of H. felis infection. To determine whether inflammatory monocytes played a role in the protection, we depleted these cells with anti-CCR2 depleting antibodies. Indeed, depletion of inflammatory monocytes was associated with an impaired vaccine-induced reduction of H. felis infection on day 5 postinfection. To determine whether inflammatory monocytes had a direct or indirect role, we studied their antimicrobial activities. We observed that inflammatory monocytes produced tumor necrosis factor alpha and inducible nitric oxide synthase (iNOS), two major antimicrobial factors. Lastly, using a Helicobacter in vitro killing assay, we showed that mouse inflammatory monocytes and activated human monocytes killed H. pylori in an iNOS-dependent manner. Collectively, these data show that inflammatory monocytes play a direct role in the immunization-induced reduction of H. felis infection in the gastric mucosa.