17 results for Standard model

at Université de Lausanne, Switzerland


Relevance: 60.00%

Abstract:

Introduction: Although the pig is a standard model for the evaluation of various diseases in humans, including coagulopathy, it is not clear whether results in animals can be extrapolated to man. Materials and methods: In 75 anesthetized pigs, we assessed reagent-supported thrombelastometry (ExTEM®), platelet-blocked thrombelastometry (FibTEM®), and aprotinin thrombelastometry (ApTEM®). Results were compared to values from 13 anesthetized humans. Results (median, 95% CI): ExTEM®: While clot strength was comparable in pigs (66 mm, 65-67 mm) and in humans (64 mm, 60-68 mm; NS), clotting time in animals was longer (pigs 64 s, 62-66 s; humans 55 s, 49-71 s; P<0.05) and clot formation time shorter (pigs 52 s, 49-54 s; humans 83 s, 67-98 s; P<0.001). The clot lysis index at 30 minutes was lower in animals (96.9%, 95.1-97.3%) than in humans (99.5%, 98.6-99.9%; P<0.001). ApTEM® showed no hyperfibrinolysis in animals. Modification of the anesthesia protocol in animals resulted in significant ExTEM® changes. FibTEM®: Complete platelet inhibition yielded a significantly higher platelet contribution to clot strength in pigs (79%, 76-81%) than in humans (73%, 71-77%; P<0.05), whereas the fibrinogen contribution to clot strength was higher in humans (27%, 24-29%) than in animals (21%, 19-24%; P<0.05). Conclusions: Maximum clot firmness is comparable in human and porcine blood. However, clot lysis, platelet and fibrinogen contribution to clot strength, as well as initiation and propagation of clotting, are considerably different between pigs and humans. In addition, anesthetic drugs seem to influence thrombelastometry in animals. Accordingly, coagulation abnormalities in pigs subjected to diseases may not necessarily represent the coagulation profile in sick patients. (C) 2011 Elsevier Ltd. All rights reserved.
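For orientation, the platelet and fibrinogen contributions quoted above are typically derived from the ExTEM and FibTEM maximum clot firmness (MCF) values; the decomposition sketched below is the usual one, with the FibTEM MCF back-calculated here purely for illustration rather than quoted from the paper:

\[ \text{fibrinogen share} = \frac{MCF_{\mathrm{FibTEM}}}{MCF_{\mathrm{ExTEM}}}, \qquad \text{platelet share} = 1 - \frac{MCF_{\mathrm{FibTEM}}}{MCF_{\mathrm{ExTEM}}} \]

For the porcine values above, a 21% fibrinogen share with an ExTEM MCF of 66 mm would correspond to a FibTEM MCF of roughly 0.21 × 66 ≈ 14 mm.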

Relevance: 60.00%

Abstract:

Introduction: In my thesis I argue that economic policy is about both economics and politics. Consequently, analysing and understanding economic policy ideally has at least two parts. The economics part is centered on the expected impact of a specific policy on the real economy, both in terms of efficiency and equity; its insights indicate in which direction the fine-tuning of economic policies should go. However, the fine-tuning of economic policies will most likely be subject to political constraints. That is why, in the politics part, a much better understanding can be gained by taking into account how the incentives of politicians and special interest groups, as well as the role played by different institutional features, affect the formation of economic policies. The first part and chapter of my thesis concentrates on the efficiency-related impact of economic policies: how does corporate income taxation in general, and corporate income tax progressivity in particular, affect the creation of new firms? Reduced progressivity and flat-rate taxes are in vogue. By 2009, 22 countries were operating flat-rate income tax systems, as were 7 US states and 14 Swiss cantons (for corporate income only). Tax reform proposals in the spirit of the "flat tax" model typically aim to reduce three parameters: the average tax burden, the progressivity of the tax schedule, and the complexity of the tax code. In joint work, Marius Brülhart and I explore the implications of changes in these three parameters for entrepreneurial activity, measured by counts of firm births in a panel of Swiss municipalities. Our results show that lower average tax rates and reduced complexity of the tax code promote firm births. Controlling for these effects, reduced progressivity inhibits firm births. Our reading of these results is that tax progressivity has an insurance effect that facilitates entrepreneurial risk taking. The positive effects of lower tax levels and reduced complexity are estimated to be significantly stronger than the negative effect of reduced progressivity. To the extent that firm births reflect desirable entrepreneurial dynamism, it is not the flattening of tax schedules that is key to successful tax reforms, but the lowering of average tax burdens and the simplification of tax codes. Flatness per se is of secondary importance and even appears to be detrimental to firm births. The second part of my thesis, corresponding to the second and third chapters, concentrates on how economic policies are formed. By the nature of the analysis, these two chapters draw on a broader literature than the first. Both economists and political scientists have done extensive research on how economic policies are formed, and researchers in both disciplines have recognised the importance of special interest groups trying to influence policy-making through various channels. In general, economists base their analysis on a formal and microeconomically founded approach while abstracting from institutional details; political scientists' frameworks, in contrast, are generally richer in institutional features but lack the theoretical rigour of economists' approaches. I start from the economist's point of view but try to borrow as much as possible from the findings of political science to gain a better understanding of how economic policies are formed in reality.
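A minimal sketch of the kind of count-data panel regression this chapter describes, on synthetic data with hypothetical variable names (this is not the authors' code or exact specification):

```python
# Hypothetical sketch: firm births regressed on average tax rate, progressivity
# and tax-code complexity in a municipality-year panel, with municipality and
# year fixed effects and standard errors clustered by municipality.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_muni, n_years = 50, 10
df = pd.DataFrame({
    "municipality": np.repeat(np.arange(n_muni), n_years),
    "year": np.tile(np.arange(2000, 2000 + n_years), n_muni),
    "avg_tax_rate": rng.uniform(0.05, 0.25, n_muni * n_years),
    "progressivity": rng.uniform(0.0, 0.1, n_muni * n_years),
    "complexity": rng.uniform(0.0, 1.0, n_muni * n_years),
})
# synthetic counts: lower average rates and lower complexity -> more firm births
lam = np.exp(2.0 - 8.0 * df["avg_tax_rate"] + 2.0 * df["progressivity"] - 0.5 * df["complexity"])
df["firm_births"] = rng.poisson(lam)

model = smf.poisson(
    "firm_births ~ avg_tax_rate + progressivity + complexity + C(municipality) + C(year)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["municipality"]})
print(model.summary())
```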
In the second chapter, I take a theoretical approach and focus on the institutional policy framework to explore how interactions between different political institutions affect the outcome of trade policy in the presence of special interest groups' lobbying. Standard political economy theory treats the government as a single institutional actor which sets tariffs by trading off social welfare against contributions from special interest groups seeking industry-specific protection from imports. However, these models lack important (institutional) features of reality. That is why, in my model, I split the government into a legislative and an executive branch, both of which can be lobbied by special interest groups. Furthermore, the legislative has the option to delegate its trade policy authority to the executive, and I allow the executive to compensate the legislative in exchange for delegation. Despite ample anecdotal evidence, bargaining over the delegation of trade policy authority has not yet been formally modelled in the literature. I show that delegation has an impact on policy formation in that it leads to lower equilibrium tariffs compared with a standard model without delegation. I also show that delegation will only take place if the lobby is not strong enough to prevent it. Furthermore, the option to delegate increases the bargaining power of the legislative at the expense of the lobbies. The findings of this model can therefore shed light on why the U.S. Congress often delegates trade policy authority to the executive. In the final chapter of my thesis, my coauthor, Antonio Fidalgo, and I take a narrower approach and focus on the individual politician level of policy-making to explore how connections to private firms and networks within parliament affect individual politicians' decision-making. Theories in the spirit of the model of the second chapter show how campaign contributions from lobbies to politicians can influence economic policies, and there exists an abundant empirical literature that analyses ties between firms and politicians based on campaign contributions. The evidence on the impact of campaign contributions is, however, mixed at best. In our paper, we analyse an alternative channel of influence in the form of personal connections between politicians and firms through board membership. We identify a direct effect of board membership on individual politicians' voting behaviour and an indirect leverage effect when politicians with board connections influence non-connected peers. We assess the importance of these two effects using a vote in the Swiss parliament on a government bailout of the national airline, Swissair, in 2001, which serves as a natural experiment. We find that both the direct effect of connections to firms and the indirect leverage effect had a strong and positive impact on the probability that a politician supported the government bailout.
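For reference, the single-actor benchmark referred to above as the standard model is usually formalized (in the Grossman-Helpman tradition) as a government choosing the tariff vector to maximize a weighted sum of aggregate welfare and lobby contributions; the two-branch extension with bargaining over delegation is the chapter's own contribution and is not reproduced here:

\[ G(\boldsymbol{\tau}) \;=\; a\,W(\boldsymbol{\tau}) \;+\; \sum_{i \in L} C_i(\boldsymbol{\tau}) \]

where \(\boldsymbol{\tau}\) is the vector of tariffs, \(W\) aggregate welfare, \(C_i\) the contribution schedule of organized industry \(i\), and \(a\) the weight the government places on welfare relative to contributions.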

Relevance: 60.00%

Abstract:

PURPOSE: EEG and somatosensory evoked potentials are highly predictive of poor outcome after cardiac arrest; their accuracy for predicting good recovery is, however, low. We evaluated whether the addition of an automated mismatch negativity-based auditory discrimination paradigm (ADP) to EEG and somatosensory evoked potentials improves the prediction of awakening. METHODS: EEG and ADP were prospectively recorded in 30 adults during therapeutic hypothermia and in normothermia. We studied the progression of auditory discrimination, using single-trial multivariate analyses, from therapeutic hypothermia to normothermia, and its correlation with outcome at 3 months, assessed with cerebral performance categories (CPC). RESULTS: At 3 months, 18 of 30 patients (60%) survived; 5 had severe neurologic impairment (CPC 3) and 13 had a good recovery (CPC 1-2). All 10 subjects showing an improvement of auditory discrimination from therapeutic hypothermia to normothermia regained consciousness: ADP was 100% predictive of awakening. The addition of ADP significantly improved mortality prediction (area under the curve 0.77 for the standard model including clinical examination, EEG and somatosensory evoked potentials, versus 0.86 after adding ADP; P = 0.02). CONCLUSIONS: This automated ADP significantly improves early coma prognostic accuracy after cardiac arrest and therapeutic hypothermia. The progression of auditory discrimination is strongly predictive of favorable recovery and appears complementary to existing prognosticators of poor outcome. Before routine implementation, validation on larger cohorts is warranted.
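A minimal illustration, on synthetic data, of the kind of AUC comparison reported above (adding an ADP-derived predictor to a baseline prognostic model); variable names and data are hypothetical, and the statistical test used in the study is not reproduced:

```python
# Synthetic sketch: ROC AUC of a baseline prognostic model with and without an
# additional ADP-derived predictor.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 200
baseline = rng.normal(size=(n, 3))   # stand-ins for clinical exam, EEG, SSEP scores
adp = rng.normal(size=(n, 1))        # stand-in for ADP discrimination progression
outcome = (0.8 * baseline[:, 0] + 1.2 * adp[:, 0] + rng.normal(size=n) > 0).astype(int)

both = np.hstack([baseline, adp])
auc_std = roc_auc_score(outcome, LogisticRegression().fit(baseline, outcome).predict_proba(baseline)[:, 1])
auc_adp = roc_auc_score(outcome, LogisticRegression().fit(both, outcome).predict_proba(both)[:, 1])
print(f"AUC standard model: {auc_std:.2f}; AUC with ADP added: {auc_adp:.2f}")
```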

Relevance: 60.00%

Abstract:

The function of most proteins is not determined experimentally, but is extrapolated from homologs. According to the "ortholog conjecture", or standard model of phylogenomics, protein function changes rapidly after duplication, leading to paralogs with different functions, while orthologs retain the ancestral function. We report here that a comparison of experimentally supported functional annotations among homologs from 13 genomes mostly supports this model. We show that to analyze GO annotation effectively, several confounding factors need to be controlled: authorship bias, variation of GO term frequency among species, variation of background similarity among species pairs, and propagated annotation bias. After controlling for these biases, we observe that orthologs have generally more similar functional annotations than paralogs. This is especially strong for sub-cellular localization. We observe only a weak decrease in functional similarity with increasing sequence divergence. These findings hold over a large diversity of species; notably orthologs from model organisms such as E. coli, yeast or mouse have conserved function with human proteins.
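A toy sketch of the core comparison (hypothetical gene identifiers and GO terms; the bias corrections described above are deliberately omitted):

```python
# Toy sketch: functional similarity of ortholog vs. paralog pairs as the
# Jaccard index of their experimentally supported GO term sets.
def jaccard(a: set, b: set) -> float:
    """Jaccard similarity of two GO term sets (0 if both are empty)."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

go = {  # hypothetical annotations
    "geneA_human": {"GO:0005737", "GO:0006096"},
    "geneA_mouse": {"GO:0005737", "GO:0006096", "GO:0005829"},
    "geneA2_human": {"GO:0005634", "GO:0006355"},
}
ortholog_pairs = [("geneA_human", "geneA_mouse")]
paralog_pairs = [("geneA_human", "geneA2_human")]

def mean_similarity(pairs):
    return sum(jaccard(go[a], go[b]) for a, b in pairs) / len(pairs)

print("mean ortholog similarity:", mean_similarity(ortholog_pairs))
print("mean paralog similarity:", mean_similarity(paralog_pairs))
```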

Relevance: 60.00%

Abstract:

Executive Summary: The unifying theme of this thesis is the pursuit of satisfactory ways to quantify the risk-reward trade-off in financial economics: first in the context of a general asset pricing model, then across models, and finally across country borders. The guiding principle in that pursuit was to seek innovative solutions by combining ideas from different fields in economics and broader scientific research. For example, in the first part of this thesis we sought a fruitful application of strong existence results in utility theory to topics in asset pricing. In the second part we implement an idea from the field of fuzzy set theory in the optimal portfolio selection problem, while the third part of this thesis is, to the best of our knowledge, the first empirical application of some general results in asset pricing in incomplete markets to the important topic of measuring financial integration. While the first two parts of this thesis effectively combine well-known ways to quantify risk-reward trade-offs, the third one can be viewed as an empirical verification of the usefulness of the so-called "good deal bounds" theory in designing risk-sensitive pricing bounds. Chapter 1 develops a discrete-time asset pricing model based on a novel ordinally equivalent representation of recursive utility. To the best of our knowledge, we are the first to use a member of a novel class of recursive utility generators to construct a representative-agent model that addresses some long-standing issues in asset pricing. Applying strong representation results allows us to show that the model features countercyclical risk premia, for both consumption and financial risk, together with a low and procyclical risk-free rate. As the recursive utility used nests the well-known time-state separable utility as a special case, all results nest the corresponding ones from the standard model and thus shed light on its well-known shortcomings. The empirical investigation carried out to support these theoretical results, however, showed that as long as one resorts to econometric methods based on approximating conditional moments with unconditional ones, it is not possible to distinguish the model we propose from the standard one. Chapter 2 is joint work with Sergei Sontchik. There we provide theoretical and empirical motivation for the aggregation of performance measures. The main idea is that, just as it makes sense to apply several performance measures ex post, it also makes sense to base optimal portfolio selection on ex-ante maximization of as many performance measures as desired. We thus offer a concrete algorithm for optimal portfolio selection via ex-ante optimization, over different horizons, of several risk-return trade-offs simultaneously. An empirical application of that algorithm, using seven popular performance measures, suggests that the realized returns feature better distributional characteristics than realized returns from portfolio strategies that are optimal with respect to a single performance measure. When comparing the distributions of realized returns we used two partial risk-reward orderings: first- and second-order stochastic dominance.
We first used the Kolmogorov-Smirnov test to determine whether the two distributions are indeed different, which, combined with a visual inspection, allowed us to demonstrate that the proposed aggregation of performance measures leads to realized portfolio returns that first-order stochastically dominate the ones resulting from optimization with respect to a single measure such as the Treynor ratio or Jensen's alpha. We checked for second-order stochastic dominance via pointwise comparison of the so-called absolute Lorenz curve, i.e. the sequence of expected shortfalls for a range of quantiles. Since the plot of the absolute Lorenz curve for the aggregated performance measures was above the one corresponding to each individual measure, we were tempted to conclude that the algorithm we propose leads to a portfolio return distribution that second-order stochastically dominates those obtained with virtually all individual performance measures considered. Chapter 3 proposes a measure of financial integration based on recent advances in asset pricing in incomplete markets. Given a base market (a set of traded assets) and an index of another market, we propose to measure financial integration through time by the size of the spread between the pricing bounds of the market index relative to the base market. The bigger the spread around country index A, viewed from market B, the less integrated markets A and B are. We investigate the presence of structural breaks in the size of the spread for EMU member country indices before and after the introduction of the Euro. We find evidence that both the level and the volatility of our financial integration measure increased after the introduction of the Euro. That counterintuitive result suggests the presence of an inherent weakness in the attempt to measure financial integration independently of economic fundamentals. Nevertheless, the results about the bounds on the risk-free rate appear plausible from the viewpoint of existing economic theory about the impact of integration on interest rates.
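The dominance checks described above can be sketched compactly; the following is an illustrative Python sketch on synthetic return series (data, tolerances and grid choices are assumptions, not the thesis code): the Kolmogorov-Smirnov test, first-order dominance via empirical CDFs, and second-order dominance via pointwise comparison of absolute Lorenz curves (cumulative expected shortfalls).

```python
# Illustrative sketch: FOSD/SOSD checks between two samples of realized returns.
import numpy as np
from scipy import stats

def first_order_dominates(x, y, grid_size=500):
    """x FOSD y if F_x(t) <= F_y(t) for every t."""
    grid = np.linspace(min(x.min(), y.min()), max(x.max(), y.max()), grid_size)
    Fx = np.searchsorted(np.sort(x), grid, side="right") / len(x)
    Fy = np.searchsorted(np.sort(y), grid, side="right") / len(y)
    return bool(np.all(Fx <= Fy + 1e-12))

def absolute_lorenz(x, probs):
    """Approximate integral of the quantile function up to each probability level."""
    xs = np.sort(x)
    qs = np.quantile(xs, probs)
    return np.array([xs[xs <= q].sum() / len(xs) for q in qs])

def second_order_dominates(x, y, n_points=99):
    """x SOSD y if x's absolute Lorenz curve lies weakly above y's everywhere."""
    p = np.linspace(0.01, 0.99, n_points)
    return bool(np.all(absolute_lorenz(x, p) >= absolute_lorenz(y, p) - 1e-12))

rng = np.random.default_rng(0)
agg = rng.normal(0.006, 0.02, 2500)      # hypothetical aggregated-measure returns
single = rng.normal(0.004, 0.03, 2500)   # hypothetical single-measure returns
ks = stats.ks_2samp(agg, single)         # are the two distributions different?
print(f"KS test: D={ks.statistic:.3f}, p={ks.pvalue:.3g}")
print("FOSD:", first_order_dominates(agg, single))
print("SOSD:", second_order_dominates(agg, single))
```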

Relevance: 60.00%

Abstract:

We use cryo-electron microscopy (cryo-EM) to study the 3D shapes of 94-bp-long DNA minicircles and address the question of whether cyclization of such short DNA molecules necessitates the formation of sharp, localized kinks in DNA or whether the necessary bending can be redistributed and accomplished within the limits of the elastic, standard model of DNA flexibility. By comparing the shapes of covalently closed, nicked and gapped DNA minicircles, we conclude that 94-bp-long covalently closed and nicked DNA minicircles do not show sharp kinks while gapped DNA molecules, containing very flexible single-stranded regions, do show sharp kinks. We corroborate the results of cryo-EM studies by using Bal31 nuclease to probe for the existence of kinks in 94-bp-long minicircles.
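As a rough order-of-magnitude check under the standard elastic (worm-like chain) description invoked above, take a persistence length of about 50 nm and a 94-bp contour length of about 32 nm bent smoothly into a circle of radius \(R = L/2\pi \approx 5.1\) nm (the numerical values are textbook assumptions used for illustration, not taken from the paper):

\[ E_{\mathrm{bend}} = \frac{P\,k_BT}{2} \oint \kappa^2\, \mathrm{d}s = \frac{P\,k_BT}{2}\,\frac{L}{R^2} \approx \frac{50 \times 32}{2 \times 5.1^2}\, k_BT \approx 31\, k_BT, \]

i.e. an energy that is large but can in principle be paid by uniform bending, which is why distinguishing smooth bending from localized kinks requires direct structural evidence of the kind presented here.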

Relevance: 60.00%

Abstract:

The LHCb experiment is being built at the future LHC accelerator at CERN. It is a forward single-arm spectrometer dedicated to precision measurements of CP violation and of rare decays in the b-quark sector. It is presently finishing its R&D and final design stage, and construction has already started for the magnet and the calorimeters. In the Standard Model, CP violation arises via the complex phase of the 3 x 3 CKM (Cabibbo-Kobayashi-Maskawa) quark mixing matrix. The LHCb experiment will test the unitarity of this matrix by measuring, in several theoretically unrelated ways, all angles and sides of the so-called "unitarity triangle". This will make it possible to over-constrain the model and, hopefully, to exhibit inconsistencies that would signal physics beyond the Standard Model. Vertex reconstruction is a fundamental requirement for the LHCb experiment: displaced secondary vertices are a distinctive signature of b-hadron decays and are used in the LHCb topological trigger. The Vertex Locator (VeLo) has to provide precise measurements of track coordinates close to the interaction region. These are used to reconstruct the production and decay vertices of beauty hadrons and to provide accurate measurements of their decay lifetimes. The VeLo electronics is an essential part of the data acquisition system and must conform to the overall LHCb electronics specification. Its design must maximise the signal-to-noise ratio in order to achieve the best tracking reconstruction performance in the detector. The electronics has been designed in parallel with the silicon detector development and went through several prototyping phases, which are described in this thesis.
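For context, the "unitarity triangle" mentioned above follows from one of the orthogonality conditions of the CKM matrix; this is standard textbook material rather than a result specific to this thesis:

\[ V_{ud}V_{ub}^{*} + V_{cd}V_{cb}^{*} + V_{td}V_{tb}^{*} = 0. \]

Plotted as vectors in the complex plane, the three terms close into a triangle whose sides and angles LHCb intends to measure in independent ways, so that any failure of the triangle to close would signal physics beyond the Standard Model.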

Relevance: 60.00%

Abstract:

The Belle experiment is located at the KEK research centre in Japan and is primarily devoted to the study of CP violation in the B meson sector. Belle sits on the KEKB collider, one of the two currently running "B factories", which produce B anti-B pairs. KEKB has produced more than 150 million pairs in total, a world record for this kind of collider, and this large sample allows very precise measurements in the physics of beauty mesons; the present analysis falls within the framework of such precision measurements. One of the most remarkable phenomena in high-energy physics is the ability of the weak interaction to couple a neutral meson to its anti-meson. In this work we study the coupling of the neutral B meson to the neutral anti-B meson, which induces an oscillation whose frequency Δmd can be measured accurately. Besides the interest of this phenomenon in itself, the measurement plays an important role in the quest for the origin of CP violation, which the standard model of electroweak interactions does not include in a fully satisfactory way. The search for as yet unexplained physical phenomena is therefore the main motivation of the Belle collaboration. Many measurements of Δmd have been performed before; the present work, however, reaches a precision on Δmd never attained previously. This is the result of the excellent performance of KEKB and of an original approach that considerably reduces the background contamination of the selected events, an approach already used successfully by other experiments under conditions somewhat different from those at Belle. The method consists in partially reconstructing one of the B mesons in the channel B̄⁰ → D*⁺(D⁰π⁺)ℓ⁻ν̄ℓ, using only the information on the lepton ℓ and the pion π. The information on the other B meson of the initial B anti-B pair is extracted from a single high-energy lepton. The available sample therefore does not suffer from the large reductions implied by full reconstruction, while the contamination from charged B mesons, which KEKB produces in equal quantity to neutral B mesons, is strongly reduced compared with an inclusive analysis. We finally obtain Δmd = 0.513 ± 0.006 ± 0.008 ps⁻¹, where the first error is statistical and the second systematic.
What is matter made of? How does it hold together? These are the questions that research in high-energy physics attempts to answer. This research is conducted on two constantly interacting levels: on the one hand, theoretical models are developed in an attempt to understand and describe the observations; on the other hand, those observations are made by colliding elementary particles at high energy. This is how the existence of four fundamental forces and of 24 elementary constituents, classified into "quarks" and "leptons", was established, one of the finest successes of the model in use today, called the "Standard Model". There is, however, one fundamental observation that the Standard Model struggles to explain: the almost complete disappearance of antimatter (the "negative" of matter). At the fundamental level, this must correspond to an asymmetry between particles (the constituents of matter) and antiparticles (the constituents of antimatter), known as the CP asymmetry (or CP violation). Although included in the Standard Model, this asymmetry appears to be only partially accounted for, and its origin is unknown. Intense research is therefore under way to shed light on this asymmetry, and the Belle experiment in Japan is one of the pioneers: it studies the physical phenomena related to a family of particles called "B mesons", which are known to be closely linked to CP asymmetry. This thesis is part of that research. We studied a remarkable property of the neutral B meson, its oscillation with its anti-meson, that is, the ability of this particle to turn into its associated antiparticle. This oscillation is clearly connected to CP asymmetry, and here we determined its frequency with an as yet unequalled precision. The method used consists in characterizing a pair of B mesons through decays that each include a lepton; greater precision is obtained by also searching for a particle called the pion, which comes from the decay of one of the mesons. Besides the interest of this oscillation phenomenon in itself, this measurement helps to refine the Standard Model, directly or indirectly. In the longer term, it may also help to elucidate the mystery of the asymmetry between matter and antimatter.
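As background to the quantity Δmd measured above, the time-dependent probabilities for an initially pure neutral B to be observed as unmixed or mixed take the standard textbook form (neglecting CP violation in mixing and the width difference); the formula is quoted here for orientation, not from the thesis:

\[ P_{\mathrm{unmixed/mixed}}(t) = \frac{1}{2\,\tau_{B^0}}\, e^{-t/\tau_{B^0}}\, \bigl[\, 1 \pm \cos(\Delta m_d\, t) \,\bigr], \]

so that the asymmetry between unmixed and mixed events oscillates as cos(Δmd t), which is what a fit to the tagged sample extracts.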

Relevance: 60.00%

Abstract:

The aim of the present study was to identify Candida albicans transcription factors (TFs) involved in virulence. Although mice are considered the gold-standard model to study fungal virulence, mini-host infection models have been increasingly used. Here, barcoded TF mutants were first screened in mice in pools of strains, and fungal burdens (FBs) were quantified in kidneys. Mutants of unannotated genes that generated a kidney FB significantly different from that of the wild type were selected and individually examined in Galleria mellonella. In addition, mutants that could not be detected in mice were also tested in G. mellonella. Only 25% of these mutants displayed matching phenotypes in both hosts, highlighting a significant discrepancy between the two models. To address the basis of this difference (pool or host effects), a set of 19 mutants tested in G. mellonella was also injected individually into mice. Matching FB phenotypes were observed in 50% of the cases, highlighting the bias due to host effects. In contrast, 33.4% concordance was observed between pool and single-strain infections in mice, thereby highlighting the bias introduced by the "pool effect." After filtering the results obtained from the two infection models, mutants for MBF1 and ZCF6 were selected. Independent marker-free mutants were subsequently tested in both hosts to validate the previous results. The MBF1 mutant showed impaired infection in both models, while the ZCF6 mutant was only significant in mouse infections. The two mutants showed no obvious in vitro phenotypes compared with the wild type, indicating that these genes might be specifically involved in in vivo adaptation.

Relevance: 30.00%

Abstract:

Introduction. Selective embolization of the left gastric artery (LGA) reduces levels of ghrelin and achieves significant short-term weight loss. However, embolization of the LGA would prevent the performance of bariatric procedures because the high-risk leakage area (the gastroesophageal junction [GEJ]) would be devascularized. Aim. To assess an alternative vascular approach to the modulation of ghrelin levels and to generate a blood-flow manipulation that increases the vascular supply to the GEJ. Materials and methods. A total of 6 pigs underwent laparoscopic clipping of the left gastroepiploic artery. Preoperative and postoperative CT angiographies were performed. Ghrelin levels were assessed perioperatively and then once per week for 3 weeks. Reactive oxygen species (ROS; expressed as ROS/mg of dry weight [DW]), mitochondrial respiratory rate, and capillary lactates were assessed before and 1 hour after clipping (T0 and T1) and after 3 weeks of survival (T2) on seromuscular biopsies. A celiac trunk angiography was performed at 3 weeks. Results. Mean (± standard deviation) ghrelin levels were significantly reduced 1 hour after clipping (1902 ± 307.8 pg/mL vs 1084 ± 680.0 pg/mL; P = .04) and at 3 weeks (954.5 ± 473.2 pg/mL; P = .01). Mean ROS levels were statistically significantly decreased at the cardia at T2 when compared with T0 (0.018 ± 0.006 mg/DW vs 0.02957 ± 0.0096 mg/DW; P = .01) and T1 (0.0376 ± 0.008 mg/DW; P = .007). Capillary lactates were significantly decreased after 3 weeks, and the mitochondrial respiratory rate remained constant over time at the cardia and pylorus, showing significant regional differences. Conclusions. Manipulation of the gastric flow targeting the gastroepiploic arcade induces ghrelin reduction. An endovascular approach is currently under evaluation.

Relevance: 30.00%

Abstract:

Breast cancer is one of the most common cancers, affecting one in eight women during their lives. Survival rates have increased steadily thanks to early diagnosis with mammography screening and more efficient treatment strategies. Post-operative radiation therapy is a standard of care in the management of breast cancer and has been shown to reduce efficiently both the local recurrence rate and breast cancer mortality. Radiation therapy is, however, associated with some late effects for long-term survivors. Radiation-induced secondary cancer is a relatively rare but severe late effect of radiation therapy. Currently, radiotherapy plans are essentially optimized to maximize tumor control and minimize late deterministic effects (tissue reactions), which are mainly associated with high doses (≫ 1 Gy). With improved cure rates and new radiation therapy technologies, it is also important to evaluate and minimize secondary cancer risks for different treatment techniques. This is a particularly challenging task due to the large uncertainties in the dose-response relationship.
In contrast with late deterministic effects, secondary cancers may be associated with much lower doses, and therefore out-of-field doses (also called peripheral doses), typically below 1 Gy, need to be determined accurately. Out-of-field doses result from patient scatter and from head scatter from the treatment unit. These doses are particularly challenging to compute, and we characterized them by Monte Carlo (MC) calculation. A detailed MC model of the Siemens Primus linear accelerator was thoroughly validated against measurements. We investigated the accuracy of such a model for retrospective dosimetry in epidemiological studies on secondary cancers. Considering that patients in such large studies could be treated on a variety of machines, we assessed the uncertainty in reconstructed peripheral dose due to the variability of peripheral dose among various linac geometries. For large open fields (> 10 x 10 cm²), the uncertainty would be less than 50%, but for small fields and wedged fields the uncertainty in reconstructed dose could rise to a factor of 10. It was concluded that such a model could be used only for conventional treatments using large open fields. The MC model of the Siemens Primus linac was then used to compare out-of-field doses for different treatment techniques in a female whole-body CT-based phantom. Current techniques such as conformal wedge-based radiotherapy and hybrid IMRT were investigated and compared to older two-dimensional radiotherapy techniques. MC doses were also compared to those of a commercial Treatment Planning System (TPS). While the TPS is routinely used to determine the dose to the contralateral breast and the ipsilateral lung, which lie mostly outside the treatment fields, we have shown that these doses may be highly inaccurate depending on the treatment technique investigated. MC shows that hybrid IMRT is dosimetrically similar to three-dimensional wedge-based radiotherapy within the field, but offers substantially reduced doses to out-of-field healthy organs. Finally, many different approaches to risk estimation extracted from the literature were applied to the calculated MC dose distribution. Absolute risks varied substantially, as did the ratio of risks between two treatment techniques, reflecting the large uncertainties involved with current risk models.
Despite all these uncertainties, the hybrid IMRT investigated resulted in systematically lower cancer risks than any of the other treatment techniques. More epidemiological studies with accurate dosimetry are required in the future to construct robust risk models. In the meantime, any treatment strategy that reduces out-of-field doses to healthy organs should be investigated. Electron radiotherapy might offer interesting possibilities in this regard.
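As a pointer to the kind of published risk models applied above, a common linear-in-dose form estimates, for each out-of-field organ, an excess absolute risk proportional to the organ-averaged dose; coefficients and modifying factors differ substantially between models, which is precisely the source of the spread in absolute risk noted above (a generic form, not a formula taken from this work):

\[ \mathrm{EAR}_{\mathrm{organ}} \;\approx\; \beta_{\mathrm{organ}} \, \bar{D}_{\mathrm{organ}} \, \mu(\text{age at exposure},\ \text{attained age},\ \text{sex}), \]

with \(\beta\) a risk coefficient per unit dose and \(\mu\) a dimensionless modifying function.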

Relevance: 30.00%

Abstract:

A method of objectively determining imaging performance for a mammography quality assurance programme for digital systems was developed. The method is based on the assessment of the visibility of a spherical microcalcification of 0.2 mm using a quasi-ideal observer model. It requires the assessment of the spatial resolution (modulation transfer function) and the noise power spectra of the systems. The contrast is measured using a 0.2-mm-thick Al sheet and polymethylmethacrylate (PMMA) blocks. The minimal image quality was defined as that giving a target contrast-to-noise ratio (CNR) of 5.4. Several evaluations of this objective method for assessing image quality in mammography quality assurance programmes have been performed on computed radiography (CR) and digital radiography (DR) mammography systems. The measurement gives the threshold CNR necessary to reach the minimum standard image quality required with regard to the visibility of a 0.2-mm microcalcification. This method may replace the CDMAM image evaluation and simplify the threshold contrast visibility test used in mammography quality assurance.
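A minimal sketch of the pass/fail logic against the threshold CNR of 5.4, assuming regions of interest taken with and without the 0.2-mm Al sheet (pixel values, ROI sizes and the exact CNR definition used by the programme are assumptions here):

```python
# Hypothetical sketch: CNR of an Al-sheet-on-PMMA test object vs. the 5.4 threshold.
import numpy as np

def cnr(signal_roi: np.ndarray, background_roi: np.ndarray) -> float:
    """One common QA definition: |mean difference| over background noise."""
    return abs(signal_roi.mean() - background_roi.mean()) / background_roi.std(ddof=1)

rng = np.random.default_rng(42)
background = rng.normal(500.0, 8.0, size=(100, 100))  # PMMA-only region (assumed values)
signal = rng.normal(450.0, 8.0, size=(100, 100))      # region behind the 0.2-mm Al sheet

value = cnr(signal, background)
print(f"CNR = {value:.1f} -> {'pass' if value >= 5.4 else 'fail'} (threshold 5.4)")
```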

Relevance: 30.00%

Abstract:

Inhibition of vascular endothelial growth factor (VEGF) has become the standard of care for patients presenting with wet age-related macular degeneration. However, monthly intravitreal injections are required for optimal efficacy. We have previously shown that electroporation-enabled ciliary muscle gene transfer results in sustained protein secretion into the vitreous for up to 9 months. Here, we evaluated the long-term efficacy of ciliary muscle gene transfer of three soluble VEGF receptor-1 (sFlt-1) variants in a rat model of laser-induced choroidal neovascularization (CNV). All three sFlt-1 variants significantly diminished vascular leakage and neovascularization as measured by fluorescein angiography (FA) and choroidal flatmounts at 3 weeks. FA and infracyanine angiography demonstrated that inhibition of CNV was maintained for up to 6 months after gene transfer of the two shortest sFlt-1 variants. Throughout, clinical efficacy was correlated with sustained VEGF neutralization in the ocular media. Interestingly, treatment with sFlt-1 induced a 50% downregulation of VEGF messenger RNA levels in the retinal pigment epithelium and the choroid. We demonstrate for the first time that non-viral gene transfer can achieve a long-term reduction of VEGF levels and efficacy in the treatment of CNV. Gene Therapy advance online publication, 27 June 2013; doi:10.1038/gt.2013.36.

Relevance: 30.00%

Abstract:

WHAT'S KNOWN ON THE SUBJECT? AND WHAT DOES THE STUDY ADD?: The AMS 800 urinary control system is the gold standard for the treatment of urinary incontinence due to sphincter insufficiency. Despite excellent functional outcomes and the latest technological improvements, the revision rate remains significant. To overcome the shortcomings of the current device, we developed a modern electromechanical artificial urinary sphincter. The results demonstrate that this new sphincter is effective and well tolerated up to 3 months. This preliminary study represents a first step in the clinical application of novel technologies and of an alternative compression mechanism for the urethra. OBJECTIVES: To evaluate the effectiveness in continence achievement of a new electromechanical artificial urinary sphincter (emAUS) in an animal model. To assess the urethral response and the animals' general response to short-term and mid-term activation of the emAUS. MATERIALS AND METHODS: The principle of the emAUS is electromechanical induction of alternating compression of successive segments of the urethra by a series of cuffs activated by artificial muscles. Between February 2009 and May 2010 the emAUS was implanted in 17 sheep divided into three groups. The first phase aimed to measure bladder leak point pressure during activation of the device. The second and third phases aimed to assess tissue response to the presence of the device after 2-9 weeks and after 3 months, respectively. Histopathological and immunohistochemical evaluation of the urethra was performed. RESULTS: Bladder leak point pressure was measured at levels between 1091 ± 30.6 cmH2O and 1244.1 ± 99 cmH2O (mean ± standard deviation), depending on the number of cuffs used. At gross examination, the explanted urethra showed no sign of infection, atrophy or stricture. On microscopic examination, no significant difference in structure was found between urethral segments surrounded by a cuff and control urethra. In the peripheral tissues, the implanted material elicited a chronic foreign-body reaction. Apart from one case, specimens did not show significant presence of lymphocytes, polymorphonuclear leucocytes, necrosis or cell degeneration. Immunohistochemistry confirmed the absence of macrophages in the samples. CONCLUSIONS: This animal study shows that the emAUS can provide continence. This new electronically controlled sequential alternating compression mechanism can avoid damage to urethral vascularity, at least up to 3 months after implantation. After this positive proof of concept, long-term studies are needed before clinical application could be considered.

Relevance: 30.00%

Abstract:

RATIONALE AND OBJECTIVES: Dose reduction may compromise patients because of a decrease in image quality. Therefore, the amount of dose savings in new dose-reduction techniques needs to be thoroughly assessed. To avoid repeated studies in one patient, chest computed tomography (CT) scans at different dose levels were performed in corpses, comparing model-based iterative reconstruction (MBIR) as a tool to enhance image quality with current standard full-dose imaging. MATERIALS AND METHODS: Twenty-five human cadavers were scanned (CT HD750) after contrast medium injection at different, decreasing dose levels D0-D5 and reconstructed with MBIR. The data at the full-dose level, D0, were additionally reconstructed with standard adaptive statistical iterative reconstruction (ASIR), which represented the full-dose baseline reference (FDBR). Two radiologists independently compared image quality (IQ) in 3-mm multiplanar reformations for soft-tissue evaluation of D0-D5 to FDBR (-2, diagnostically inferior; -1, inferior; 0, equal; +1, superior; and +2, diagnostically superior). For statistical analysis, the intraclass correlation coefficient (ICC) and the Wilcoxon test were used. RESULTS: Mean CT dose index values (mGy) were as follows: D0/FDBR = 10.1 ± 1.7, D1 = 6.2 ± 2.8, D2 = 5.7 ± 2.7, D3 = 3.5 ± 1.9, D4 = 1.8 ± 1.0, and D5 = 0.9 ± 0.5. Mean IQ ratings were as follows: D0 = +1.8 ± 0.2, D1 = +1.5 ± 0.3, D2 = +1.1 ± 0.3, D3 = +0.7 ± 0.5, D4 = +0.1 ± 0.5, and D5 = -1.2 ± 0.5. All values demonstrated a significant difference from baseline (P < .05), except mean IQ for D4 (P = .61). The ICC was 0.91. CONCLUSIONS: Compared to ASIR, MBIR allowed for a significant dose reduction of 82% without impairment of IQ. This resulted in a calculated mean effective dose below 1 mSv.
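The 82% figure follows directly from the reported CT dose index values, since D4 (1.8 mGy) was the lowest dose level whose image quality did not differ significantly from the full-dose baseline (10.1 mGy):

\[ \frac{10.1 - 1.8}{10.1} \approx 0.82 . \]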