209 results for strategies alignment
Abstract:
Background: Cardiac magnetic resonance (CMR) is accepted as a method to assess suspected coronary artery disease (CAD). Nonetheless, invasive coronary angiography (CXA), with or without fractional flow reserve (FFR), remains the main diagnostic test to evaluate CAD. Few data exist on the economic impact of the use of these procedures in a population with a low to intermediate pre-test probability. Objective: To compare the costs of 3 decision strategies to revascularize a patient with suspected CAD: 1) a strategy guided by CMR; 2) a hypothetical strategy guided by CXA-FFR; 3) a hypothetical strategy guided by CXA alone.
Abstract:
BACKGROUND: The radial artery is routinely used as a graft for surgical arterial myocardial revascularization, but the optimal site for the proximal radial artery anastomosis remains undetermined. In this study, we analyzed the short-term results and the determinants of operative risk of four common techniques for radial artery implantation. METHODS: From January 2000 to December 2004, 571 patients underwent coronary artery bypass grafting with radial arteries. Data were analyzed for the entire population and for subgroups defined by the proximal radial artery anastomosis site: 140 T-grafts with the mammary artery (group A), 316 free grafts with the proximal anastomosis to the ascending aorta (group B), 55 in situ mammary arteries elongated with the radial artery (group C), and 60 radial arteries elongated with a segment of mammary artery and anastomosed to the ascending aorta (group D). RESULTS: The mean age was 53.8 +/- 7.7 years; 55.5% of patients had a previous myocardial infarction and 73% presented with satisfactory left ventricular function. Complete arterial myocardial revascularization was achieved in 532 cases (93.2%), and 90.2% of the procedures were performed under cardiopulmonary bypass and cardioplegic arrest. The operative mortality rate was 0.9%; a postoperative myocardial infarction was diagnosed in 19 patients (3.3%), an intra-aortic balloon pump was used in 10 patients (1.7%), and a mechanical circulatory device was implanted in 2 patients. The radial artery harvest site remained free of complications in all cases. The proximal radial artery anastomosis site was not a determinant of early hospital mortality. Group C showed a higher risk of postoperative myocardial infarction (p = 0.09), as did female gender (p = 0.003), hypertension (p = 0.059), and a longer cardiopulmonary bypass time. CONCLUSIONS: The radial artery and the mammary artery can provide complete arterial revascularization even in patients with contraindications to double mammary artery use. The four most common techniques for the proximal radial artery anastomosis are not associated with a higher operative risk and can be used interchangeably to achieve the best surgical results.
Abstract:
The M-Coffee server is a web server that makes it possible to compute multiple sequence alignments (MSAs) by running several MSA methods and combining their output into a single consensus alignment. This allows users to run all their methods of choice simultaneously without having to arbitrarily choose one of them. The MSA is delivered along with a local estimate of its consistency with the individual MSAs it was derived from. The computation of the consensus multiple alignment is carried out using a special mode of the T-Coffee package [Notredame, Higgins and Heringa (T-Coffee: a novel method for fast and accurate multiple sequence alignment. J. Mol. Biol. 2000; 302: 205-217); Wallace, O'Sullivan, Higgins and Notredame (M-Coffee: combining multiple sequence alignment methods with T-Coffee. Nucleic Acids Res. 2006; 34: 1692-1699)]. Given a set of sequences (DNA or proteins) in FASTA format, M-Coffee delivers a multiple alignment in the most common formats. M-Coffee is a free, open-source package distributed under the GPL license, and it is available either as a standalone package or as a web service from www.tcoffee.org.
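Since M-Coffee ships as a mode of the standalone T-Coffee package, a local run can be scripted; below is a minimal sketch, assuming the t_coffee binary is installed and on PATH (the flag names follow the T-Coffee documentation but should be checked against the installed version):

    # Minimal sketch of scripting a local M-Coffee run via the T-Coffee
    # package. Assumes the `t_coffee` binary is installed and on PATH;
    # flag names are per the T-Coffee documentation and may vary by version.
    import subprocess
    from pathlib import Path

    def run_mcoffee(fasta_in: str, out_fmt: str = "fasta_aln") -> Path:
        """Run T-Coffee in M-Coffee mode and return the alignment path."""
        out_file = Path(fasta_in).with_suffix(".aln.fasta")
        subprocess.run(
            [
                "t_coffee", fasta_in,
                "-mode", "mcoffee",      # combine several MSA methods into a consensus
                "-output", out_fmt,      # requested alignment format
                "-outfile", str(out_file),
            ],
            check=True,
        )
        return out_file

    print(run_mcoffee("sequences.fasta"))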
Complications of different ventilation strategies in endoscopic laryngeal surgery: a 10-year review.
Abstract:
BACKGROUND: Spontaneous ventilation, mechanical controlled ventilation, apneic intermittent ventilation, and jet ventilation are commonly used during interventional suspension microlaryngoscopy. The aim of this study was to investigate the specific complications of each technique, with special emphasis on transtracheal and transglottal jet ventilation. METHODS: The authors performed a retrospective single-institution analysis of a case series of 1,093 microlaryngoscopies performed in 661 patients between January 1994 and January 2004. Data were collected from two separate prospective databases. The feasibility of and complications encountered with each ventilation technique were analyzed as the main outcome measures. RESULTS: During the 1,093 suspension microlaryngoscopies, ventilation was supplied by mechanical controlled ventilation via small endotracheal tubes (n = 200), intermittent apneic ventilation (n = 159), transtracheal jet ventilation (n = 265), or transglottal jet ventilation (n = 469). Twenty-nine minor and four major complications occurred. Seventy-five percent of the patients with major events had an American Society of Anesthesiologists physical status classification of III. Five laryngospasms were observed with apneic intermittent ventilation; the other 24 complications (including 7 barotraumas) occurred during jet ventilation. Transtracheal jet ventilation was associated with a significantly higher complication rate than transglottal jet ventilation (P < 0.0001; odds ratio, 4.3 [95% confidence interval, 1.9-10.0]). All severe complications were related to barotrauma resulting from airway outflow obstruction during jet ventilation, most often laryngospasm. CONCLUSIONS: The use of a transtracheal cannula was the major independent risk factor for complications during jet ventilation for interventional microlaryngoscopy. The anesthetist's vigilance in clinically detecting and preventing outflow airway obstruction remains the best safeguard against barotrauma during subglottic jet ventilation.
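For readers who want to sanity-check figures of this kind, the odds ratio and its Wald confidence interval follow directly from the 2x2 complication table; the sketch below uses hypothetical counts, not the study's raw data:

    # Minimal sketch: odds ratio with a Wald 95% confidence interval from
    # a 2x2 table. The example counts are hypothetical placeholders, not
    # the study's raw data.
    import math

    def odds_ratio_ci(a, b, c, d, z=1.96):
        """a/b = complications/no complications in group 1; c/d likewise in group 2."""
        or_ = (a * d) / (b * c)
        se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # standard error of log(OR)
        return or_, (math.exp(math.log(or_) - z * se),
                     math.exp(math.log(or_) + z * se))

    # Hypothetical counts for transtracheal vs. transglottal jet ventilation:
    print(odds_ratio_ci(17, 248, 7, 462))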
Abstract:
The UKPDS and DCCT studies have demonstrated the critical role of tight glycaemic control in reducing the micro- and macrovascular damage linked to diabetes. Unfortunately, the need for insulin among type 2 diabetic patients remains high: each year, 5 to 7% of these patients will require a switch from oral antidiabetic drugs to insulin treatment to maintain good glycaemic control. This manuscript reviews the currently available oral antidiabetic drugs, their benefits as well as their potential harms, and proposes a simplified therapeutic strategy for type 2 diabetes.
Abstract:
The maintenance of genetic variation is a long-standing issue because the adaptive value of the life-history strategies associated with each genetic variant is usually unknown. Moreover, evidence for the coexistence of alternative, evolutionarily fixed strategies at the population level remains scarce. Because heritable melanin-based coloration in the tawny owl (Strix aluco) shows different physiological and behavioral norms of reaction, we investigated whether coloration is associated with investment in maintenance and reproduction. Light melanic owls had lower adult survival than dark melanic conspecifics, and color variation was related to the trade-off between offspring number and quality. When we experimentally enlarged brood size, light melanic males produced more fledglings but in poorer condition, and these fledglings were less often recruited into the local breeding population than those of darker melanic conspecifics. Our results also suggest that dark melanic males allocate a constant effort to raising their brood independently of environmental conditions, whereas lighter melanic males finely adjust their reproductive effort in relation to changes in environmental conditions. Color traits can therefore be associated with life-history strategies, and stochastic environmental perturbations can temporarily favor one phenotype over others. The existence of fixed strategies implies that some phenotypes can sometimes display a "maladapted" strategy. Long-term population monitoring is therefore vital for a full understanding of how different genotypes deal with trade-offs.
Abstract:
Summary: This thesis consists of three essays on optimal dividend strategies; each essay corresponds to a chapter. The first two essays were written in collaboration with Professors Hans Ulrich Gerber and Elias S. W. Shiu and have been published; see Gerber et al. (2006b) and Gerber et al. (2008). The third essay was written in collaboration with Professor Hans Ulrich Gerber. The problem of optimal dividend strategies goes back to de Finetti (1957) and is posed as follows: given the surplus of a company, determine the optimal strategy for distributing dividends, where the criterion is to maximize the sum of the discounted dividends paid to shareholders until the company's ruin. Since de Finetti (1957), the problem has taken several forms and has been solved for various models. In the classical model of ruin theory, the problem was solved by Gerber (1969) and, more recently, using a different approach, by Azcue and Muler (2005) and Schmidli (2008). In the classical model, there is a continuous, constant inflow of money, while the outflows are random and follow a jump process, namely a compound Poisson process. A good example of such a model is the surplus of an insurance company, for which the inflows and outflows are the premiums and the claims, respectively; the first graph of Figure 1 illustrates an example. In this thesis, only barrier strategies are considered, meaning that whenever the surplus exceeds the barrier level b, the excess is distributed to shareholders as dividends. The second graph of Figure 1 shows the same surplus example when a barrier at level b is introduced, and the third graph shows the cumulative dividends.

Chapter 1: "Maximizing dividends without bankruptcy". In this first essay, the optimal barriers are computed for different claim-amount distributions according to two criteria: I) the optimal barrier is computed using the usual criterion, which is to maximize the expected discounted dividends until ruin; II) the optimal barrier is computed using a second criterion, which is to maximize the expected difference between the discounted dividends until ruin and the deficit at ruin. This essay is inspired by Dickson and Waters (2004), whose idea is to make the shareholders bear the deficit at ruin. This is all the more relevant for an insurance company, whose ruin must be avoided. In the example of Figure 1, the deficit at ruin is denoted R. Numerical examples allow us to compare the optimal barrier levels in situations I and II. The idea of adding a penalty at ruin was generalized in Gerber et al. (2006a).

Chapter 2: "Methods for estimating the optimal dividend barrier and the probability of ruin". In this second essay, since in practice one never has complete information about the claim-amount distribution, we assume that only its first moments are known. The essay develops and examines methods for approximating, in this situation, the optimal barrier level under the usual criterion (case I above). The "de Vylder" and "diffusion" approximations are explained and examined; some of these approximations use two, three, or four of the first moments. Numerical examples allow us to compare the approximations of the optimal barrier level not only with the exact values but also with one another.

Chapter 3: "Optimal dividends with incomplete information". In this third and final essay, we return to methods for approximating the optimal barrier level when only the first moments of the jump-amount distribution are known. This time, the dual model is considered. As in the classical model, there is a continuous flow in one direction and a jump process in the other; contrary to the classical model, however, the gains follow a compound Poisson process while the losses are constant and continuous; see Figure 2. Such a model would be suitable for a pension fund or for a company specializing in discoveries or inventions. Both the "de Vylder" and "diffusion" approximations and the new "gamma" and "gamma process" approximations are explained and analyzed; the new approximations appear to give better results in some cases.
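To make the barrier strategy concrete, here is a minimal Monte Carlo sketch of the classical model described above: premiums flow in at a constant rate, claims arrive as a compound Poisson process with exponential amounts, and any surplus above the barrier b leaves as discounted dividends until ruin. All parameter values are illustrative assumptions, not figures from the thesis:

    # Minimal Monte Carlo sketch of a dividend barrier strategy in the
    # classical (Cramér-Lundberg) risk model. Parameters are illustrative.
    import math
    import random

    def discounted_dividends(x0, b, c, lam, claim_mean, delta, rng):
        """One simulated path: discounted dividends paid until ruin."""
        t, u, div = 0.0, min(x0, b), 0.0
        if x0 > b:                    # excess above the barrier is paid out at once
            div += x0 - b
        while True:
            T = rng.expovariate(lam)  # time until the next claim
            tau = (b - u) / c         # time needed to climb back to the barrier
            if T > tau:               # at the barrier, premiums leave as dividends
                div += c * (math.exp(-delta * (t + tau))
                            - math.exp(-delta * (t + T))) / delta
                u = b
            else:
                u += c * T
            t += T
            u -= rng.expovariate(1.0 / claim_mean)   # exponential claim amount
            if u < 0:                 # ruin: dividends stop
                return div

    def expected_discounted_dividends(b, n_paths=20000, seed=1):
        rng = random.Random(seed)
        params = dict(x0=5.0, c=1.2, lam=1.0, claim_mean=1.0, delta=0.03)
        return sum(discounted_dividends(b=b, rng=rng, **params)
                   for _ in range(n_paths)) / n_paths

    # Scan a few barrier levels to locate the approximate optimal barrier.
    for b in (2.0, 4.0, 6.0, 8.0, 10.0):
        print(b, round(expected_discounted_dividends(b), 3))

Scanning the estimated expectation over a grid of barrier levels, as in the last loop, is the brute-force counterpart of the exact and moment-based optimal barriers studied in Chapters 1 to 3.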
Abstract:
The management of gliomas remains challenging and requires a multidisciplinary approach involving neurosurgeons, radiation therapists and oncologists. For patients with glioblastomas, progress has been made in recent years with the introduction of a combined modality treatment associating radiation therapy with concomitant chemotherapy using the novel alkylating agent temozolomide. This combination resulted in a significant prolongation of survival and an increase in the number of patients surviving well beyond two years. Since then, interest in developing new agents for this disease has increased dramatically. In parallel, molecular markers, such as the methylation status of MGMT or the identification of the 1p/19q translocation in oligodendrogliomas, have made it possible to identify distinct subtypes with a particularly good response to treatment or a different prognosis. These developments have implications for the design of clinical trials of new potential drug treatments. In this article, we review the current management of low- and high-grade gliomas, including astrocytomas, oligodendrogliomas and glioblastomas, and provide an outlook on potential future therapies.
Abstract:
MOTIVATION: The detection of positive selection is widely used to study gene and genome evolution, but its application remains limited by the high computational cost of existing implementations. We present a series of computational optimizations for more efficient estimation of the likelihood function on large-scale phylogenetic problems. We illustrate our approach using the branch-site model of codon evolution. RESULTS: We introduce novel optimization techniques that substantially outperform both CodeML from the PAML package and our previously optimized sequential version SlimCodeML. These techniques can also be applied to other likelihood-based phylogeny software. Our implementation scales well for large numbers of codons and/or species and can therefore analyse substantially larger datasets than CodeML. We evaluated FastCodeML on different platforms and measured average sequential speedups of FastCodeML (single-threaded) versus CodeML of up to 5.8, average speedups of FastCodeML (multi-threaded) versus CodeML on a single node (shared memory) of up to 36.9 for 12 CPU cores, and average speedups of the distributed FastCodeML versus CodeML of up to 170.9 on eight nodes (96 CPU cores in total). AVAILABILITY AND IMPLEMENTATION: ftp://ftp.vital-it.ch/tools/FastCodeML/. CONTACT: selectome@unil.ch or nicolas.salamin@unil.ch.
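Because the quoted speedups are all measured against CodeML, FastCodeML's own parallel scaling has to be derived by dividing out its sequential advantage; the following back-of-the-envelope sketch does this with the figures from the abstract (the efficiency step is our own reading, not a result reported by the authors):

    # Minimal sketch: deriving FastCodeML's own parallel scaling from the
    # speedups reported relative to CodeML. The input figures are those
    # quoted in the abstract; the efficiency calculation is an added,
    # back-of-the-envelope interpretation.
    reported = {                 # cores -> average speedup vs. CodeML
        1: 5.8,                  # single-threaded FastCodeML
        12: 36.9,                # one shared-memory node
        96: 170.9,               # eight distributed nodes
    }
    base = reported[1]           # FastCodeML's sequential advantage over CodeML
    for cores, speedup in reported.items():
        self_speedup = speedup / base      # FastCodeML vs. its own 1-core run
        efficiency = self_speedup / cores  # fraction of ideal linear scaling
        print(f"{cores:3d} cores: {self_speedup:5.2f}x self-speedup, "
              f"{efficiency:.0%} parallel efficiency")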