Abstract:
BACKGROUND: Postoperative chemoradiotherapy (CRT) of gastric carcinoma improves survival among high-risk patients. This study was undertaken to analyse long-term survival probability and the impact of certain covariates on survival outcome in affected individuals. MATERIALS AND METHODS: Between January 2000 and December 2005, 244 patients with gastric cancer underwent adjuvant radiotherapy (RT) in our institution. Data were retrieved retrospectively from patient files and analysed with SPSS version 21.0. RESULTS: A total of 244 cases, with a male-to-female ratio of 2.2:1, were enrolled in the study. The median age of the patients was 52 years (range, 20-78 years). Surgical margin status was positive or close in 72 (33%) of 220 patients. The postoperative adjuvant RT dose was 46 Gy. Median follow-up was 99 months (range, 79-132 months) for surviving patients and 23 months (range, 2-155 months) for all patients. Actuarial overall survival (OS) probabilities at 1, 3, 5 and 10 years were 79%, 37%, 24% and 16%, respectively. Actuarial progression-free survival (PFS) probabilities were 69%, 34%, 23% and 16% at the same time points. AJCC stage I-II disease, subtotal gastrectomy and adjuvant CRT were significantly associated with improved OS and PFS in multivariate analyses. Neither surgical margin status nor type of lymph node dissection was prognostic for survival. CONCLUSIONS: Postoperative CRT should be considered for all patients at high risk of recurrence after gastrectomy. Besides well-known prognostic factors such as stage, lymph node status and concurrent chemotherapy, the type of gastrectomy was an important prognostic factor in our series. Our findings add to the discussion on the definition of the required surgical margin for subtotal gastrectomy. We consider that these observations may be useful in pointing the way to improved outcomes in future randomised trials.
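Actuarial survival probabilities like those reported (79% OS at 1 year, and so on) are typically obtained with a Kaplan-Meier estimator. A minimal sketch of that calculation, using invented follow-up data for illustration (the abstract does not publish patient-level data):

```python
# Minimal Kaplan-Meier estimator. Illustrative only: the toy data below are
# invented, not the study's.

def kaplan_meier(times, events):
    """Return (time, survival_probability) steps.

    times  : follow-up time for each patient (e.g., months)
    events : 1 if the event (death/progression) was observed, 0 if censored
    """
    s = 1.0
    steps = []
    for t in sorted(set(times)):
        # d: events at time t; n: patients still at risk just before t
        d = sum(1 for ti, ei in zip(times, events) if ti == t and ei == 1)
        n = sum(1 for ti in times if ti >= t)
        if d > 0:
            s *= 1.0 - d / n
            steps.append((t, s))
    return steps

# toy example: 5 patients, events at 3 and 10 months, the rest censored
times = [3, 7, 10, 14, 20]
events = [1, 0, 1, 0, 0]
print(kaplan_meier(times, events))  # step function: S(3)=0.80, S(10)~0.53
```

Censored patients (event 0) still count in the at-risk denominator until their last follow-up, which is what distinguishes this from a naive proportion.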
Abstract:
AIMS: To evaluate the very long-term risk of recurrent thromboembolic events in patients treated by percutaneous PFO closure. METHODS AND RESULTS: Between 1998 and 2008, a total of 232 consecutive patients with PFO and a high suspicion of paradoxical embolism were treated by percutaneous closure. The following major events were observed during hospitalisation: implantation failure (one patient) and an acute left-sided device thrombus requiring surgery (one patient). The primary endpoint of the study was a recurrent embolic event after at least five years of follow-up. During a mean follow-up of 7.6±2.4 years, this event occurred in five patients, representing an annual risk of 0.28% per patient. Other major complications during follow-up were late thrombus formation on the device (two patients) and transient atrial fibrillation (15 patients). Three patients died during follow-up from cardiovascular causes considered unrelated to the index procedure. The PFO was judged closed on follow-up echocardiography in 92.3% of patients. CONCLUSIONS: Long-term follow-up after percutaneous PFO closure for presumed paradoxical embolism reveals very low recurrence rates. This observation should be put in perspective with recently published randomised trials comparing percutaneous closure and medical therapy.
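The reported 0.28% annual per-patient risk follows directly from the abstract's numbers: five events divided by the total patient-years of follow-up. A quick check of the arithmetic, assuming every one of the 232 patients contributes the mean 7.6 years (an approximation, since individual follow-up varies):

```python
# Annual per-patient risk = events / patient-years of follow-up.
patients = 232
mean_follow_up_years = 7.6
recurrent_events = 5

patient_years = patients * mean_follow_up_years   # 1763.2 patient-years
annual_risk = recurrent_events / patient_years    # ~0.0028
print(f"{annual_risk:.2%}")                       # prints 0.28%
```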
Abstract:
Background: In most emergency departments (ED) in developed countries, a subset of patients visits the ED frequently. Despite their small numbers, these patients account for a disproportionately high share of all ED visits and use a significant proportion of healthcare resources, placing a heavy economic burden on hospital and healthcare system budgets. To improve the management of these patients, the University Hospital of Lausanne, Switzerland implemented a case management (CM) intervention between May 2012 and July 2013. In this randomized controlled trial, 250 frequent ED users (>5 visits during the previous 12 months) were allocated to either the CM group or the standard ED care (SC) group and followed up for 12 months. The primary finding was that CM significantly reduced ED visits. The present study examined whether the CM intervention also reduced the costs generated by frequent ED users, not only from the hospital perspective but also from the healthcare system perspective. Methods: Cost data were obtained from the hospital's analytical accounting system and from health insurers. Multivariate linear models including a fixed group effect, socio-demographic characteristics and health-related variables were fitted.
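The adjusted cost comparison described in the Methods can be sketched as an ordinary least squares fit of cost on a group indicator plus covariates. This is an illustrative stand-in with synthetic data and an invented covariate, not the study's actual model or software:

```python
# Sketch of a cost model with a fixed "group" effect (1 = case management,
# 0 = standard care) plus a covariate, fitted by OLS. All data are synthetic;
# the true effect is built in at -2000 so the fit can be sanity-checked.
import numpy as np

rng = np.random.default_rng(0)
n = 250                                   # same sample size as the trial
group = rng.integers(0, 2, n)             # randomized group assignment
age = rng.normal(45.0, 12.0, n)           # illustrative covariate
cost = 10_000 - 2_000 * group + 50 * age + rng.normal(0.0, 1_000.0, n)

# design matrix: intercept, group indicator, covariate
X = np.column_stack([np.ones(n), group, age])
beta, *_ = np.linalg.lstsq(X, cost, rcond=None)
print(f"adjusted group effect on cost: {beta[1]:.0f}")  # expected near -2000
```

The coefficient on the group column is the covariate-adjusted cost difference attributable to the intervention, which is the quantity of interest in the trial.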
Abstract:
Paradoxically, high-growth, high-investment developing countries tend to experience capital outflows. This paper shows that this allocation puzzle can be explained simply by introducing uninsurable idiosyncratic investment risk into the neoclassical growth model with international trade in bonds, and by taking into account not only TFP catch-up but also the capital wedge, that is, the distortions on the return to capital. The model fits the following two facts, documented in a sample of 67 countries between 1980 and 2003: (i) TFP growth is positively correlated with capital outflows in a sample including creditor countries; (ii) the long-run level of capital per efficient unit of labor is positively correlated with capital outflows. Consistent with this, we show that the capital flows predicted by the model are positively correlated with actual flows in this sample once the capital wedge is accounted for. The fact that Asia dominates global imbalances can be explained by its relatively low capital wedge.
Abstract:
GNbAC1 is a humanized monoclonal antibody targeting MSRV-Env, an endogenous retroviral protein that is expressed in multiple sclerosis (MS) lesions, is pro-inflammatory and inhibits oligodendrocyte precursor cell differentiation. This paper describes the open-label extension, up to 12 months, of a trial testing GNbAC1 in 10 MS patients at 2 and 6 mg/kg. The primary objective was to assess GNbAC1 safety; other objectives were pharmacokinetic and pharmacodynamic assessments. During the extended study, no safety issues occurred in the 8 remaining patients. No anti-GNbAC1 antibodies were detected. GNbAC1 appears well tolerated.
Abstract:
Weight regain after caloric restriction results in accelerated fat storage in adipose tissue. This catch-up fat phenomenon is postulated to result partly from suppressed skeletal muscle thermogenesis, but the underlying mechanisms are elusive. We investigated whether the slowing of the skeletal muscle contraction-relaxation cycle that occurs after caloric restriction persists during weight recovery and could contribute to catch-up fat. Using a rat model of semistarvation-refeeding, in which fat recovery is driven by suppressed thermogenesis, we show that contraction and relaxation of leg muscles are slower after both semistarvation and refeeding. These effects are associated with (i) higher expression of muscle deiodinase type 3 (DIO3), which inactivates tri-iodothyronine (T3), and lower expression of the T3-activating enzyme deiodinase type 2 (DIO2), (ii) slower net formation of T3 from its T4 precursor in muscle, and (iii) an accumulation of slow fibers at the expense of fast fibers. These semistarvation-induced changes persisted during recovery and correlated with impaired expression of transcription factors involved in slow-twitch muscle development. We conclude that diminished muscle thermogenesis following caloric restriction results from reduced muscle T3 levels, altered muscle-specific transcription factors, and a fast-to-slow fiber shift causing slower contractility. These energy-sparing effects persist during weight recovery and contribute to catch-up fat.
Abstract:
Our consumption of groundwater, in particular as drinking water or for irrigation, has increased considerably over the years. Numerous problems have consequently emerged, ranging from the prospection of new resources to the remediation of polluted aquifers. Independently of the hydrogeological problem considered, the main challenge remains the characterisation of the properties of the subsurface. A stochastic approach is then necessary to represent this uncertainty, by considering multiple geological scenarios and generating a large number of geostatistical realizations. We then encounter the main limitation of these approaches, namely the computational cost of simulating complex flow processes for each of these realizations. In the first part of the thesis, this problem is investigated in the context of uncertainty propagation, where an ensemble of realizations is identified as representing the subsurface properties. To propagate this uncertainty to the quantity of interest while limiting the computational cost, current methods rely on approximate flow models. This allows the identification of a subset of realizations representing the variability of the initial ensemble. The complex flow model is then evaluated only for this subset, and inference is made on the basis of these complex responses. Our objective is to improve the performance of this approach by using all the available information. To this end, the subset of approximate and exact responses is used to build an error model, which then serves to correct the remaining approximate responses and predict the response of the complex model. This method maximizes the use of the available information without any perceptible increase in computation time. Uncertainty propagation thus becomes more accurate and more robust.
The strategy explored in the first chapter consists in learning, from a subset of realizations, the relationship between the approximate and complex flow models. In the second part of the thesis, this methodology is formalized mathematically by introducing a regression model between the functional responses. As this problem is ill-posed, its dimensionality must be reduced. In this respect, the novelty of the work presented lies in the use of functional principal component analysis (FPCA), which not only performs the dimensionality reduction while maximizing the retained information, but also allows the quality of the error model to be diagnosed in this functional space. The proposed methodology is applied to a pollution problem involving a non-aqueous phase liquid, and the results obtained show that the error model yields a strong reduction in computation time while correctly estimating the uncertainty. Moreover, for each approximate response, a prediction of the complex response is provided by the error model. The concept of a functional error model is therefore relevant for uncertainty propagation, but also for Bayesian inference problems. Markov chain Monte Carlo (MCMC) methods are the algorithms most commonly used to generate geostatistical realizations consistent with the observations. However, these methods suffer from a very low acceptance rate in high-dimensional problems, resulting in a large number of wasted flow simulations. A two-step approach, "two-stage MCMC", was introduced to avoid unnecessary simulations of the complex model through a preliminary evaluation of the proposed realization. In the third part of the thesis, the approximate flow model coupled with an error model serves as the preliminary evaluation for two-stage MCMC.
We demonstrate an increase in the acceptance rate by a factor of 1.5 to 3 compared with a classical MCMC implementation. One question remains open: how to choose the size of the training set and how to identify the realizations that optimize the construction of the error model. This calls for an iterative strategy so that, with each new flow simulation, the error model is improved by incorporating the new information. This is developed in the fourth part of the thesis, where the methodology is applied to a saline intrusion problem in a coastal aquifer. -- Our consumption of groundwater, in particular as drinking water and for irrigation, has considerably increased over the years and groundwater is becoming an increasingly scarce and endangered resource. Nowadays, we are facing many problems ranging from water prospection to sustainable management and remediation of polluted aquifers. Independently of the hydrogeological problem, the main challenge remains dealing with the incomplete knowledge of the underground properties. Stochastic approaches have been developed to represent this uncertainty by considering multiple geological scenarios and generating a large number of realizations. The main limitation of this approach is the computational cost associated with performing complex flow simulations in each realization. In the first part of the thesis, we explore this issue in the context of uncertainty propagation, where an ensemble of geostatistical realizations is identified as representative of the subsurface uncertainty. To propagate this lack of knowledge to the quantity of interest (e.g., the concentration of pollutant in extracted water), it is necessary to evaluate the flow response of each realization.
Due to computational constraints, state-of-the-art methods make use of approximate flow simulations to identify a subset of realizations that represents the variability of the ensemble. The complex and computationally heavy flow model is then run for this subset, based on which inference is made. Our objective is to increase the performance of this approach by using all of the available information and not solely the subset of exact responses. Two error models are proposed to correct the approximate responses following a machine learning approach. For the subset identified by a classical approach (here the distance kernel method), both the approximate and the exact responses are known. This information is used to construct an error model and correct the ensemble of approximate responses to predict the "expected" responses of the exact model. The proposed methodology makes use of all the available information without perceptible additional computational costs and leads to an increase in accuracy and robustness of the uncertainty propagation. The strategy explored in the first chapter consists in learning from a subset of realizations the relationship between proxy and exact curves. In the second part of this thesis, the strategy is formalized in a rigorous mathematical framework by defining a regression model between functions. As this problem is ill-posed, it is necessary to reduce its dimensionality. The novelty of the work comes from the use of functional principal component analysis (FPCA), which not only performs the dimensionality reduction while maximizing the retained information, but also allows a diagnostic of the quality of the error model in the functional space. The proposed methodology is applied to a pollution problem involving a non-aqueous phase liquid. The error model allows a strong reduction of the computational cost while providing a good estimate of the uncertainty.
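The error-model idea described above can be sketched as follows: reduce the proxy response curves to a few principal component scores, learn a regression from those scores to the exact curves on the small training subset, then use the fitted map to correct the realizations where only the proxy was run. This is a simplified stand-in (plain PCA-score regression on synthetic curves), not the thesis's FPCA construction:

```python
# Hypothetical sketch of a functional error model. proxy_curve/exact_curve
# are invented stand-ins for cheap and expensive flow-simulator outputs.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 50)       # "time" axis of each response curve
n_train, n_rest = 30, 100           # exact model run only on the subset

def proxy_curve(a):                 # cheap approximate model (stand-in)
    return a * t

def exact_curve(a):                 # expensive exact model (stand-in)
    return a * (t + 0.3 * np.sin(3.0 * t))

a = rng.uniform(0.5, 2.0, n_train + n_rest)
proxy = np.array([proxy_curve(ai) for ai in a])
exact_train = np.array([exact_curve(ai) for ai in a[:n_train]])

# dimensionality reduction of the proxy curves (PCA via SVD, 2 components)
mu = proxy.mean(axis=0)
_, _, Vt = np.linalg.svd(proxy - mu, full_matrices=False)
scores = (proxy - mu) @ Vt[:2].T

# error model: linear regression from proxy scores to the full exact curve,
# learned on the training subset where both responses are known
X = np.column_stack([np.ones(n_train), scores[:n_train]])
W, *_ = np.linalg.lstsq(X, exact_train, rcond=None)

# corrected prediction for the realizations where only the proxy was run
pred = np.column_stack([np.ones(n_rest), scores[n_train:]]) @ W
truth = np.array([exact_curve(ai) for ai in a[n_train:]])
print("mean |error| of corrected proxies:", float(np.mean(np.abs(pred - truth))))
```

The point of the sketch is the budget: the expensive model runs 30 times, yet all 130 realizations end up with a prediction of the exact response.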
The individual correction of the proxy response by the error model leads to an excellent prediction of the exact response, opening the door to many applications. The concept of functional error model is useful not only in the context of uncertainty propagation, but also, and maybe even more so, to perform Bayesian inference. Monte Carlo Markov Chain (MCMC) algorithms are the most common choice to ensure that the generated realizations are sampled in accordance with the observations. Hofever, this approach suffers from lof acceptance rate in high dimensional problems, resulting in a large number of wasted of simulations. This led to the introduction of two-stage MCMC, where the computational cost is decreased by avoiding unnecessary simulation of the exact of thanks to a preliminary evaluation of the proposal. In the third part of the thesis, a proxy is coupled to an error model to provide an approximate response for the two-stage MCMC set-up. We demonstrate an increase in acceptance rate by a factor three with respect to one-stage MCMC results. An open question remains: hof do we choose the size of the learning set and identify the realizations to optimize the construction of the error model. This requires devising an iterative strategy to construct the error model, such that, as new of simulations are performed, the error model is iteratively improved by incorporating the new information. This is discussed in the fourth part of the thesis, in which we apply this methodology to a problem of saline intrusion in a coastal aquifer.
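The two-stage mechanism can be illustrated with a toy Metropolis sampler: a cheap approximate density screens each proposal, and the expensive "exact" density is evaluated only for proposals that pass the first stage, with a second-stage ratio that removes the bias of the screen. The densities below are simple Gaussians standing in for proxy- and flow-based likelihoods, and the second-stage formula is the standard delayed-acceptance one, an assumption about the set-up rather than the thesis's exact implementation:

```python
# Toy two-stage (delayed-acceptance) Metropolis with a symmetric random-walk
# proposal. log_exact and log_approx are invented stand-ins.
import math
import random

random.seed(0)

def log_exact(x):       # expensive target density (stand-in)
    return -0.5 * x * x

def log_approx(x):      # cheap, slightly biased approximation (stand-in)
    return -0.5 * (x - 0.1) ** 2

x = 0.0
exact_calls = 0
samples = []
for _ in range(5000):
    y = x + random.gauss(0.0, 1.0)
    # stage 1: screen the proposal with the cheap model only
    if math.log(random.random()) < log_approx(y) - log_approx(x):
        # stage 2: evaluate the exact model and correct for the screen
        exact_calls += 1
        a2 = (log_exact(y) - log_exact(x)) + (log_approx(x) - log_approx(y))
        if math.log(random.random()) < a2:
            x = y
    samples.append(x)

print("exact-model evaluations:", exact_calls, "of 5000 proposals")
```

Proposals rejected at stage 1 never touch the exact model, which is exactly the saving the thesis exploits; the better the proxy-plus-error-model screen, the higher the fraction of stage-2 evaluations that end in acceptance.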